Polar visualization for non-360 deg data with interpolation - python

I am trying (hard) to find an efficient way to visualize range-bearing data in a fan-shaped image.
So far I have come up with the method shown below using OpenCV's cv2.linearPolar function. However, since my data covers a sensor opening angle of only 120 deg rather than a full circle, getting correct results out of cv2.linearPolar requires some inefficient hacks: I need to pad the data with empty blocks to represent a full 360 deg image, transform it, and then crop it to the region I am interested in.
Calculating the coordinates directly and mapping from A-->B leaves non-interpolated, empty values between each "bearing line" and does not look visually nice.
Since this needs to run in real time from a fast sensor, I would be interested in learning about another way to achieve this, perhaps using OpenGL or something else.
Current inefficient method that obtains the wanted result:
import numpy as np
import cv2
import matplotlib.pyplot as plt

image = np.random.randint(256, size=(256, 1000), dtype=np.uint8)
n_beams, n_ranges = image.shape[:2]
opening_angle_deg = 120
angle_res_deg = opening_angle_deg / n_beams

# Pad with empty blocks so the 120 deg sector sits inside a full 360 deg image.
start_v = np.zeros((int(opening_angle_deg / angle_res_deg), n_ranges), dtype=np.uint8)
end_v = np.zeros((int((360 - opening_angle_deg * 2) / angle_res_deg), n_ranges), dtype=np.uint8)
polar = cv2.vconcat([start_v, image])
polar = cv2.vconcat([polar, end_v])
center = (n_ranges, polar.shape[0] / 2)
if n_beams > n_ranges:
    dst = cv2.linearPolar(polar, center, polar.shape[1] / 2,
                          cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP + cv2.WARP_FILL_OUTLIERS)
    x_max = np.ceil(np.sin(np.deg2rad(opening_angle_deg / 2)) * polar.shape[1] / 2) * 2
    dst = dst[int(center[1] - x_max / 2):int(center[1] + x_max / 2),
              int(dst.shape[1] - polar.shape[1] / 2):]
else:
    dst = cv2.linearPolar(polar, center, polar.shape[0] / 2,
                          cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP + cv2.WARP_FILL_OUTLIERS)
    x_max = np.ceil(np.sin(np.deg2rad(opening_angle_deg / 2)) * polar.shape[0] / 2) * 2
    dst = dst[int(center[1] - x_max / 2):int(center[1] + x_max / 2),
              int(dst.shape[1] - polar.shape[0] / 2):]
dst = cv2.transpose(dst)
plt.imshow(dst)
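A possible alternative (a sketch of my own, not from the original post): since the sensor geometry is fixed, the inverse mapping can be precomputed once as a pair of lookup tables and every incoming frame warped with a single cv2.remap call, which interpolates between beams and needs no padding or cropping. Minimal sketch, assuming the fan apex sits at the bottom center of the output and beam 0 corresponds to -60 deg:

import numpy as np
import cv2

def make_fan_maps(n_beams, n_ranges, opening_angle_deg):
    # Build the lookup tables once for the fixed sensor geometry.
    half = np.deg2rad(opening_angle_deg / 2)
    out_h = n_ranges                                   # radial extent in pixels
    out_w = int(np.ceil(2 * n_ranges * np.sin(half)))  # width of the fan
    ys, xs = np.mgrid[0:out_h, 0:out_w].astype(np.float32)
    dx = xs - out_w / 2            # offset from the apex (bottom center)
    dy = out_h - ys
    map_x = np.sqrt(dx**2 + dy**2)                     # range bin (source column)
    theta = np.arctan2(dx, dy)                         # bearing, 0 = straight ahead
    map_y = (theta + half) / (2 * half) * (n_beams - 1)  # beam index (source row)
    return map_x.astype(np.float32), map_y.astype(np.float32)

map_x, map_y = make_fan_maps(256, 1000, 120)
frame = np.random.randint(256, size=(256, 1000), dtype=np.uint8)
# Out-of-sector pixels map outside the source image and are filled with 0.
fan = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR,
                borderMode=cv2.BORDER_CONSTANT, borderValue=0)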

Related

I keep getting blank images while trying to Georeference an image with GDAL

#!/usr/bin/env python3
import numpy as np
from osgeo import gdal
from osgeo import osr
# Load an array with shape (197, 250, 3)
# Data with dim of 3 contain (value, longitude, latitude)
data = np.load("data.npy")
# Copy the data and coordinates
array = data[:,:,0]
lon = data[:,:,1]
lat = data[:,:,2]
nLons = array.shape[1]
nLats = array.shape[0]
# Calculate the geotransform parameters
maxLon, minLon, maxLat, minLat = [lon.max(), lon.min(), lat.max(), lat.min()]
resLon = (maxLon - minLon) / nLons
resLat = (maxLat - minLat) / nLats
# Get the transform
geotransform = (minLon, resLon, 0, maxLat, 0, -resLat)
# Create the ouptut raster
output_raster = gdal.GetDriverByName('GTiff').Create('myRaster.tif', nLons, nLats, 1,
                                                     gdal.GDT_Int32)
# Set the geotransform
output_raster.SetGeoTransform(geotransform)
srs = osr.SpatialReference()
# Set to world projection 4326
srs.ImportFromEPSG(4326)
output_raster.SetProjection(srs.ExportToWkt())
output_raster.GetRasterBand(1).WriteArray(array)
output_raster.FlushCache()
The code above is meant to georeference a raster using GDAL but returns blank TIFF files. I have vetted the data and variables; however, I suspect the problem lies in the geotransform variables. The documentation requires the variable to be:
top-left-x, w-e-pixel-resolution, 0,
top-left-y, 0, n-s-pixel-resolution (negative value)
I have used lats and lons, but I am not sure which one corresponds to x and which to y. It could be something else, but I'm not quite sure.
Overall your approach looks correct to me, though it's hard to tell without seeing the data you're using. Here are some points to consider:
First, there's a difference between the output file being empty and it being in the wrong location; georeferencing relates only to the latter.
When working interactively, you should also make sure to properly close the Dataset using output_raster = None; that will also trigger flushing for you.
You could start by testing whether GDAL reads back the same data that you intended to write, using something like:
ds = gdal.Open('myRaster.tif')
data_from_disk = ds.ReadAsArray()
ds = None
np.testing.assert_array_equal(data_from_disk, array)
If those are not identical, it could be an issue with the datatype, like writing floats close to 0 as integers, causing them to clip to 0 and giving the appearance of an "empty" file.
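A self-contained illustration of that failure mode (not from the original answer):

import numpy as np

# Float values in [0, 1) truncate to 0 when stored in an Int32 band,
# so the raster looks "empty" even though the write itself succeeded.
values = np.random.rand(3, 3)   # e.g. reflectances in [0, 1)
print(values.astype(np.int32))  # all zeros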
Regarding the georeferencing: the projection you use has its coordinates in degrees. If yours are in radians, your output ends up close to null island.
Your approach also assumes that the data and lat/lon arrays are on a regular grid (having a constant resolution). That might not be the case, especially if the data comes with a 2D grid of coordinates.
Often, when coordinate arrays are given, they are defined as valid for the center of the pixel, whereas GDAL's geotransform is defined for the (outer) edge of the pixel. So you might need to account for that by subtracting half the resolution. This also affects your calculation of the resolution, which for the center definition should probably use / (nLons-1) and / (nLats-1). Alternatively, verify with:
# for a regular grid
resLon = lon[0,1] - lon[0,0]
resLat = lat[1,0] - lat[0,0]
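Putting the center-to-edge correction together, a hedged sketch of how the question's geotransform might be adjusted, assuming the coordinates are valid at pixel centers:

# Shift the origin by half a pixel so the geotransform refers to the
# outer edge of the first pixel, as GDAL expects.
resLon = (maxLon - minLon) / (nLons - 1)
resLat = (maxLat - minLat) / (nLats - 1)
geotransform = (minLon - resLon / 2, resLon, 0,
                maxLat + resLat / 2, 0, -resLat)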
When I run your snippet with some dummy data, it gives me a correct output (ignoring the center/edge issue mentioned above):
lat, lon = np.mgrid[89:-90:-2, -179:180:2]
array = np.sqrt(lon**2 + lat**2).astype(np.int32)

Rotating a reshaped image as a matrix operation

I have a grayscale image that I want to rotate. However, I need to do optimization on it; therefore, I cannot use Pillow or OpenCV.
I want to reshape this image using numpy.reshape into a one-dimensional vector (using the default C-style reshape).
Thereafter, I want to rotate this image around a point using matrix multiplication and addition, i.e. it should be something like
rotated_image_vector = A @ vector + b  # (or the equivalent in homogeneous coordinates)
After this operation I want to reshape the outcome back to two dimensions and have the rotated image.
It would be best if it also used linear interpolation for the pixels that do not map exactly onto another pixel.
The mathematical theory says this is possible, and I believe there is a very elegant solution to this problem, but I do not see how to create this matrix. Has anyone had this problem before, or do you see an immediate solution?
Thanks a lot,
Eike
I like your approach, but there is a slight misconception in it. What you want to transform are not the pixel values themselves but the coordinates. So you don't reshape your image but rather call np.indices on it to obtain coordinates for each pixel. For those, a rotation around a point looks like
rotation_matrix @ (coordinates - fixed_point) + fixed_point
except that I have to transpose a bit to get the dimensions to align. The code below is a slight adaptation of my code in this answer.
As an example I am going to use the Wikipedia-logo-v2 by Nohat. It is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.
First I read in the picture, swap the x and y axes so I don't go mad, and rotate the coordinates as described above.
import numpy as np
import matplotlib.pyplot as plt
import itertools
image = plt.imread('wikipedia.jpg')
image = np.swapaxes(image,0,1)/255
fixed_point = np.array(image.shape[:2], dtype='float')/2
points = np.moveaxis(np.indices(image.shape[:2]),0,-1).reshape(-1,2)
a = 2*np.pi/8
A = np.array([[np.cos(a),-np.sin(a)],[np.sin(a),np.cos(a)]])
rotated_coordinates = (A @ (points - fixed_point.reshape(1,2)).T).T + fixed_point.reshape(1,2)
Now I set up a little class to interpolate between the pixels that do not map exactly onto another pixel. Finally, I swap the axes back and plot the result.
class Image_knn():
    def fit(self, image):
        self.image = image.astype('float')

    def predict(self, x, y):
        image = self.image
        weights_x = [(1-(x % 1)).reshape(*x.shape,1), (x % 1).reshape(*x.shape,1)]
        weights_y = [(1-(y % 1)).reshape(*x.shape,1), (y % 1).reshape(*x.shape,1)]
        start_x = np.floor(x)
        start_y = np.floor(y)
        return sum([image[np.clip(np.floor(start_x + x), 0, image.shape[0]-1).astype('int'),
                          np.clip(np.floor(start_y + y), 0, image.shape[1]-1).astype('int')]
                    * weights_x[x]*weights_y[y]
                    for x,y in itertools.product(range(2),range(2))])
image_model = Image_knn()
image_model.fit(image)
transformed_image = image_model.predict(*rotated_coordinates.T).reshape(*image.shape)
plt.imshow(np.swapaxes(transformed_image,0,1))
And I get a result like this
Possible Issue
The artifact in the bottom left that looks like someone needs to clean the screen comes from the following problem: when we rotate, it can happen that we don't have enough pixels to paint the lower left. What we do by default in Image_knn is clip the coordinates to an area where we have information, so when we ask it for pixels coming from outside the image, it gives us the pixels at the boundary of the image. This looks fine if there is a uniform background, but if an object touches the edge of the picture it looks odd, like here. Just something to keep in mind when using this.
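One way to suppress that artifact (a hedged variation, not part of the original answer) is to mask destination pixels whose source coordinates fall outside the image and paint them with a constant instead of letting the clipping replicate the border:

# Continues the snippet above; x, y are the rotated source coordinates.
x, y = rotated_coordinates.T
inside = (x >= 0) & (x <= image.shape[0] - 1) & \
         (y >= 0) & (y <= image.shape[1] - 1)
transformed_image = image_model.predict(x, y).reshape(*image.shape)
transformed_image[~inside.reshape(image.shape[:2])] = 0  # fill with black
plt.imshow(np.swapaxes(transformed_image, 0, 1))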
Thank you for your answer!
But actually it is not a misconception that this rotation can be represented by a matrix multiplication with the reshaped vector.
I used your code to generate such a matrix (it's surely not the most efficient way, but it works; most likely you see a more efficient implementation immediately XD. You see, I really need it as a matrix multiplication :-D).
What I basically did is generate the representation matrix of the linear transformation by computing how each of the 100*100 basis images (i.e. the image with zeros everywhere and a one at position i) is mapped by your transformation.
import sys
import numpy as np
import matplotlib.pyplot as plt
import itertools

angle = 2*np.pi/6
image_expl = plt.imread('wikipedia.jpg')
image_expl = image_expl[:,:,0]
plt.imshow(image_expl)
plt.title("Image")
plt.show()

image_shape = image_expl.shape
pixel_number = image_shape[0]*image_shape[1]

rot_mat = np.zeros((pixel_number,pixel_number))
for i in range(pixel_number):
    # Basis image: zeros everywhere, a one at position i.
    vector = np.zeros(pixel_number)
    vector[i] = 1
    image = vector.reshape(*image_shape)
    fixed_point = np.array(image.shape, dtype='float')/2
    points = np.moveaxis(np.indices(image.shape),0,-1).reshape(-1,2)
    a = -angle
    A = np.array([[np.cos(a),-np.sin(a)],[np.sin(a),np.cos(a)]])
    rotated_coordinates = (A @ (points-fixed_point.reshape(1,2)).T).T + fixed_point.reshape(1,2)
    x,y = rotated_coordinates.T
    image = image.astype('float')
    weights_x = [(1-(x % 1)).reshape(*x.shape), (x % 1).reshape(*x.shape)]
    weights_y = [(1-(y % 1)).reshape(*x.shape), (y % 1).reshape(*x.shape)]
    start_x = np.floor(x)
    start_y = np.floor(y)
    transformed_image_returned = sum([image[np.clip(np.floor(start_x + x), 0, image.shape[0]-1).astype('int'),
                                            np.clip(np.floor(start_y + y), 0, image.shape[1]-1).astype('int')]
                                      * weights_x[x]*weights_y[y]
                                      for x,y in itertools.product(range(2),range(2))])
    rot_mat[:,i] = transformed_image_returned
    if i % 100 == 0:
        print(int(100*i/pixel_number), "% finished")

plt.imshow((rot_mat @ image_expl.reshape(-1)).reshape(image_shape))
Thank you again :-)
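A hedged aside on efficiency (not part of the original exchange): each column of rot_mat holds at most the four bilinear weights of one source pixel, so the matrix is extremely sparse and can be assembled directly from those weights, without looping over basis images. A sketch reusing rotated_coordinates, image_shape and pixel_number from the code above:

import itertools
import numpy as np
from scipy import sparse

rows, cols, vals = [], [], []
x, y = rotated_coordinates.T
for dx, dy in itertools.product(range(2), range(2)):
    # Source pixel and bilinear weight of the (dx, dy) neighbor.
    src_x = np.clip(np.floor(x) + dx, 0, image_shape[0] - 1).astype(int)
    src_y = np.clip(np.floor(y) + dy, 0, image_shape[1] - 1).astype(int)
    w = (1 - np.abs(x % 1 - dx)) * (1 - np.abs(y % 1 - dy))
    rows.append(np.arange(pixel_number))
    cols.append(src_x * image_shape[1] + src_y)
    vals.append(w)
rot_mat_sparse = sparse.csr_matrix(
    (np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
    shape=(pixel_number, pixel_number))  # duplicate entries are summed
plt.imshow((rot_mat_sparse @ image_expl.reshape(-1)).reshape(image_shape))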

SimpleITK Resize images

I have a set of 3D volumes that I am reading with SimpleITK:
import SimpleITK as sitk
for filename in filenames:
    image = sitk.ReadImage(filename)
Each of the volumes has a different size, spacing, origin, and direction. This code yields different values for different images:
print(image.GetSize())
print(image.GetOrigin())
print(image.GetSpacing())
print(image.GetDirection())
My question is: how do I transform the images to have the same size and spacing, so that they all have the same resolution and shape when converted to numpy arrays? Something like:
import SimpleITK as sitk
for filename in filenames:
    image = sitk.ReadImage(filename)
    image = transform(image, fixed_size, fixed_spacing)
    array = sitk.GetArrayFromImage(image)
The way to do this is to use the Resample function with fixed/arbitrary size and spacing. Below is a code snippet showing construction of this "reference_image" space:
reference_origin = np.zeros(dimension)
reference_direction = np.identity(dimension).flatten()
reference_size = [128]*dimension # Arbitrary sizes, smallest size that yields desired results.
reference_spacing = [ phys_sz/(sz-1) for sz,phys_sz in zip(reference_size, reference_physical_size) ]
reference_image = sitk.Image(reference_size, data[0].GetPixelIDValue())
reference_image.SetOrigin(reference_origin)
reference_image.SetSpacing(reference_spacing)
reference_image.SetDirection(reference_direction)
For a turnkey solution, have a look at this Jupyter notebook, which illustrates how to do data augmentation with variable-sized images in SimpleITK (the code above is from that notebook). You may find the other notebooks in the SimpleITK notebook repository useful too.
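To connect the snippet to the question, here is a hedged sketch of how the reference space might be built over a list of loaded volumes and then applied; reference_physical_size is computed as in the linked notebook, and data stands for the list of sitk.Image volumes:

import numpy as np
import SimpleITK as sitk

dimension = data[0].GetDimension()
# Physical extent large enough to enclose every volume (notebook recipe).
reference_physical_size = np.zeros(dimension)
for img in data:
    reference_physical_size[:] = [(sz - 1) * spc if sz * spc > mx else mx
                                  for sz, spc, mx in zip(img.GetSize(),
                                                         img.GetSpacing(),
                                                         reference_physical_size)]
# Build reference_image exactly as in the snippet above, then resample.
# The default identity transform is only adequate if the volumes already
# share an origin and direction; the notebook composes a centering
# transform instead.
arrays = [sitk.GetArrayFromImage(sitk.Resample(img, reference_image))
          for img in data]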
According to SimpleITK's documentation, the process of image resampling involves four components:
Image - the image we resample, given in coordinate system m;
Resampling grid - a regular grid of points, given in coordinate system f, which will be mapped to coordinate system m;
Transformation - maps points from coordinate system f to coordinate system m;
Interpolator - a method for obtaining the intensity values at arbitrary points in coordinate system m from the values of the points defined by the Image.
The following snippet is for downsampling the image preserving its coordinate system properties:
import numpy as np
import SimpleITK as sitk

def downsamplePatient(patient_CT, resize_factor):

    original_CT = sitk.ReadImage(patient_CT, sitk.sitkInt32)
    dimension = original_CT.GetDimension()

    # Physical size covered by the original volume.
    reference_physical_size = np.zeros(original_CT.GetDimension())
    reference_physical_size[:] = [(sz-1)*spc if sz*spc > mx else mx
                                  for sz, spc, mx in zip(original_CT.GetSize(),
                                                         original_CT.GetSpacing(),
                                                         reference_physical_size)]

    # Reference image: same origin/direction, reduced size, enlarged spacing.
    reference_origin = original_CT.GetOrigin()
    reference_direction = original_CT.GetDirection()
    reference_size = [round(sz/resize_factor) for sz in original_CT.GetSize()]
    reference_spacing = [phys_sz/(sz-1) for sz, phys_sz in zip(reference_size, reference_physical_size)]

    reference_image = sitk.Image(reference_size, original_CT.GetPixelIDValue())
    reference_image.SetOrigin(reference_origin)
    reference_image.SetSpacing(reference_spacing)
    reference_image.SetDirection(reference_direction)

    reference_center = np.array(
        reference_image.TransformContinuousIndexToPhysicalPoint(np.array(reference_image.GetSize())/2.0))

    # Align the centers of the original and reference volumes.
    transform = sitk.AffineTransform(dimension)
    transform.SetMatrix(original_CT.GetDirection())
    transform.SetTranslation(np.array(original_CT.GetOrigin()) - reference_origin)
    centering_transform = sitk.TranslationTransform(dimension)
    img_center = np.array(
        original_CT.TransformContinuousIndexToPhysicalPoint(np.array(original_CT.GetSize())/2.0))
    centering_transform.SetOffset(
        np.array(transform.GetInverse().TransformPoint(img_center) - reference_center))
    centered_transform = sitk.Transform(transform)
    centered_transform.AddTransform(centering_transform)

    # sitk.Show(sitk.Resample(original_CT, reference_image, centered_transform, sitk.sitkLinear, 0.0))

    return sitk.Resample(original_CT, reference_image, centered_transform, sitk.sitkLinear, 0.0)
Using the snippet above on a brain CT scan we get the expected downsampled result.
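A hypothetical call for illustration (the file names and factor are placeholders, not from the original answer):

# Halve the resolution of a CT volume and write the result to disk.
downsampled = downsamplePatient("patient_ct.nii.gz", 2.0)
sitk.WriteImage(downsampled, "patient_ct_half_res.nii.gz")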

Splicing image array (FITS file) using coordinates from header

I am trying to splice a FITS array based on the latitudes provided in the header. However, I cannot seem to manage it with my knowledge of Python and the astropy documentation. The code I have is something like this:
from astropy.io import fits
import numpy as np
Wise1 = fits.open('Image1.fits')
im1 = Wise1[0].data
im1 = np.where(im1 > latitude1, 0, im1)
newhdu = fits.PrimaryHDU(im1)
newhdulist = fits.HDUList([newhdu])
newhdulist.writeto('1b1_Bg_Removed_2.fits')
Here latitude1 would be a value in degrees, read from the header. So there are two things I need to accomplish:
How to call the header to recognize Galactic Latitudes?
Splice the array in such a way that it only contains values for the range of latitudes, with everything else being 0.
I think by "splice" you mean "cut out" or "crop", based on the example you've shown.
astropy.nddata has a routine for world-coordinate-system-based (i.e., lat/lon or ra/dec) cutouts
However, in the simple case you're dealing with, you just need the coordinates of each pixel. You can get them by making a WCS:
from astropy import wcs
w = wcs.WCS(Wise1[0].header)
xx, yy = np.indices(im.shape)   # im is the image array, e.g. Wise1[0].data
lon, lat = w.wcs_pix2world(xx, yy, 0)
newim = im[lat > my_lowest_latitude]
But if you want to preserve the header information, you're much better off using the cutout tool, since you then do not have to manually manage this.
from astropy.nddata import Cutout2D
from astropy import coordinates
from astropy import units as u
# example coordinate - you'll have to figure one out that's in your map
center = coordinates.SkyCoord(mylon*u.deg, mylat*u.deg, frame='fk5')
# then make an array cutout
co = Cutout2D(im, center, size=[0.1,0.2]*u.arcmin, wcs=w)
# create a new FITS HDU
hdu = fits.PrimaryHDU(data=co.data, header=co.wcs.to_header())
# write to disk
hdu.writeto('cropped_file.fits')
An example use case is in the astropy documentation.

How to separate two regions of latlon data on python

I am currently working with BUFR files with wind data. When I read such a file in Python I get 4 large vectors: a latitude vector, a longitude vector, a wind_direction vector, and a wind_speed vector.
Both wind vectors are masked arrays, because some of the data is invalid. This happens because the data comes from a non-geostationary satellite. In fact, I successfully generated the following image from this BUFR file to show you the general shape that the data takes.
In this image I have plotted a color field to represent the wind speed, while the arrows obviously represent the wind direction.
Please notice the two bands of actual data. Unfortunately, the way I am plotting the data generates a third band (where the color field is smooth) in between the actual data bands. This is an artifact of the function pcolormesh. If I could superimpose two pcolormesh plots, each one representing one of the bands, this problem would disappear.
Unfortunately, I do not know how I could separate the data "regions". I have thought about clustering techniques, but I do not know how to cluster the latlon data using ANOTHER array (the wind data) as the clustering rule.
This is my current code:
#!/usr/bin/python
import bufr
import numpy as np
import sys
import matplotlib
matplotlib.use('Agg')
from matplotlib import pyplot as plt
from matplotlib import mlab
WIND_DIR_INDEX = 97
WIND_SPEED_INDEX = 96
bfrfile = sys.argv[1]
print bfrfile
bfr = bufr.BUFRFile(bfrfile)
lon = []
lat = []
wind_d = []
wind_s = []
for record in bfr:
    for entry in record:
        if entry.index == WIND_DIR_INDEX:
            wind_d.append(entry.data)
        if entry.index == WIND_SPEED_INDEX:
            wind_s.append(entry.data)
        if entry.name.find("LONGITUDE") == 0:
            lon.append(entry.data)
        if entry.name.find("LATITUDE") == 0:
            lat.append(entry.data)
lons = np.concatenate(lon)
lats = np.concatenate(lat)
winds_d = np.concatenate(wind_d)
winds_s = np.concatenate(wind_s)
winds_d = np.ma.masked_greater(winds_d,1.0e+6)
winds_s = np.ma.masked_greater(winds_s,1.0e+6)
windu = np.cos((winds_d-180)*(np.pi/180))
windv = np.sin((winds_d-180)*(np.pi/180))
# Data interpolation for pcolormesh (needs gridded data)
xi = np.linspace(lons.min(),lons.max(),lons.size/10)
yi = np.linspace(lats.min(),lats.max(),lats.size/10)
Z = mlab.griddata(lons,lats,winds_s,xi,yi)
X,Y = np.meshgrid(xi,yi)
mydpi = 96
fig = plt.figure(frameon=True)
fig.set_size_inches(1600/mydpi,1200/mydpi)
ax = plt.Axes(fig,[0,0,1,1])
#ax.set_axis_off()
fig.add_axes(ax)
plt.hold(True);
plt.quiver(lons[::5],lats[::5],windu[::5],windv[::5],linewidths=0)
for method in (ax.set_xticks, ax.set_xticklabels, ax.set_yticks, ax.set_yticklabels):
    method([])
fig.savefig('/home/cendas/bin/python/bufr_ascat.png',bbox_inches=0,dpi=5*mydpi)
mydpi = 96
fig = plt.figure(frameon=True)
fig.set_size_inches(1600/mydpi,1200/mydpi)
ax = plt.Axes(fig,[0,0,1,1])
#ax.set_axis_off()
fig.add_axes(ax)
plt.hold(True);
try:
    plt.pcolormesh(X, Y, Z, alpha=None)
    plt.clim(0, 10)
except ValueError:
    print "Warning: Empty data array."
for method in (ax.set_xticks, ax.set_xticklabels, ax.set_yticks, ax.set_yticklabels):
    method([])
fig.savefig('/home/cendas/bin/python/bufr_ascat_color.png',bbox_inches=0,dpi=5*mydpi)
I then usually follow this python code with the following terminal commands to combine the images:
convert bufr_ascat.png -transparent white bufr_ascat.png
convert bufr_ascat_color.png -transparent white bufr_ascat_color.png
composite bufr_ascat.png bufr_ascat_color.png bufrascat.png
Don't abuse clustering for this.
What you need is a simple selection / filtering, not a structure-discovery process.
Take the mean of the non-masked coordinates: all non-masked data left of that mean is one band, and all non-masked data to the right is the other, as in the sketch below.
Clustering is the wrong tool for this task.
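A hedged sketch of that mean split, reusing the variable names (and Python 2 style) of the question; splitting on longitude is an assumption based on the bands' orientation:

# Split the swath at the mean longitude of the valid samples and grid
# each band separately, so pcolormesh never bridges the gap between them.
valid = ~np.ma.getmaskarray(winds_s)
split = lons[valid].mean()
for band in (valid & (lons < split), valid & (lons >= split)):
    xi = np.linspace(lons[band].min(), lons[band].max(), lons[band].size/10)
    yi = np.linspace(lats[band].min(), lats[band].max(), lats[band].size/10)
    Z = mlab.griddata(lons[band], lats[band], winds_s[band], xi, yi)
    X, Y = np.meshgrid(xi, yi)
    plt.pcolormesh(X, Y, Z, alpha=None)
    plt.clim(0, 10)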
