How to calculate Google Earth polygon size in pixels - python

OS X 10.7.5, Python 2.7, GE 7.1.2.2041
I have some .kml data that includes a moderately large number of polygons. Each polygon has an image associated with it. I want to use each image in a <GroundOverlay> mode with its associated polygon.
The raw images are all a bit larger than the polygons. I can easily resize the images with the Python Imaging Library (PIL), but the amount of oversizing is not consistent across the set: some are only ~5% larger, while others are up to ~20% larger.
What I would like to do is either find (or calculate) the approximate sizes of the polygons in pixels so that I can automate the resizing of their associated images with that data.
Any suggestions?

You could use the width and height of the polygon/rectangle in longitude and latitude coordinates. Then use the ratio of that rectangle to the image size and it should fit.
Edit: I should note that depending on where your images are going to show up you might need some special math for the dateline (-180 to 180) or prime-meridian (0 to 360).
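A minimal sketch of that approach, assuming each polygon is available as a list of (lon, lat) tuples and its picture as a PIL Image (the names and the resize rule are illustrative, not part of the original question):

from PIL import Image

def resize_to_polygon(img, coords):
    # Bounding box of the polygon, in degrees.
    lons = [lon for lon, lat in coords]
    lats = [lat for lon, lat in coords]
    poly_aspect = (max(lons) - min(lons)) / (max(lats) - min(lats))

    w, h = img.size
    # Shrink whichever image dimension overshoots the polygon's aspect ratio,
    # so the result stretches cleanly over the GroundOverlay's LatLonBox.
    if w / float(h) > poly_aspect:
        new_size = (int(round(h * poly_aspect)), h)
    else:
        new_size = (w, int(round(w / poly_aspect)))
    return img.resize(new_size, Image.ANTIALIAS)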

Related

Python: correcting offset of points on google maps (due to satellite image tilt)

I'm using the Google Maps Static API to get top-view satellite images of objects for which I have the surface coordinates (LoD1 / LoD2).
The points are always slightly off; I think this is due to a small tilt in the satellite image itself (is that a correct assumption?).
For example in this image I have the building shape, but the points are slightly off. Is there a way to correct this for all objects?
The red markers are the standard google-maps api pointers, the center of the original image (here it is cropped) is the center of the building, and the white line is a cv2.polyline implementation of the object shape.
Just shifting by n pixels will not help since the offset depends on the angle between the satellite and object and the shape of that object.
I am using the pyproj library to transform the coordinates, and then convert the coordinates to pixel values (by setting the center point as the center pixel value, and having the difference in the coordinate space, one can calculate the edge-points pixel values too).
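(For reference, a rough sketch of that coordinate-to-pixel step; this is not the asker's pyproj code, and it assumes a standard Web Mercator static map centered on a known point at an integer zoom level, scale factor 1.)

import math

def latlon_to_pixel(lat, lon, center_lat, center_lon, zoom, img_w, img_h):
    # World size in pixels at this zoom level (256-pixel Web Mercator tiles).
    world = 256 * 2 ** zoom

    def merc_y(lat_deg):
        return math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))

    dx = (lon - center_lon) / 360.0 * world
    dy = (merc_y(center_lat) - merc_y(lat)) / (2 * math.pi) * world
    return img_w / 2.0 + dx, img_h / 2.0 + dy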
So, the good news is that there is no need to "correct" this for all objects, because there is no way to do that without using 3d models & textures.
Google (like most map platforms) doesn't actually use satellite images; it uses aerial images taken from planes. The planes don't fly directly over the top of every building (imagine how tight/redundant their flight path would be if they did!).
Instead, the plane will take an image from some kind of angle, and then, through the wonders of photogrammetric processing, the images are all corrected and ortho-rectified so the ground surface is in the right place everywhere.
What can't (and shouldn't) be corrected in a 2d image is the location of objects above ground height. Like the roof in your image. For a more extreme example, just look at a skyscraper, and you'll realise you can't ever get the pixels correct above the ground:
https://goo.gl/maps/4tLSrd7yXQYWZPTy7

How to efficiently design a system that stores image data corresponding to a coordinate system?

I am building a device that uses a motorized stage and a camera to raster-scan a sample and store the resulting images, for downstream visualization or perception tasks.
I have attached an image for illustration. (in the image, red and yellow squares are images that map the sample area. Red square maps smaller area, and yellow square maps a larger area)
The stage has its own coordinate system (black dots), which can be mapped to the image data, as each image will contain a variable number of black dots (>1) depending on its magnification (red or yellow dots).
I have been wondering how I could design a system where I'll be able to store these images. My stage coordinate system extends from -50,000 to +50,000 with a step size of 0.1, so it wouldn't be practical to create a reference array of 500k * 500k size to map each point to any pixels that might belong to it.
I'm trying to do this in python.
There are well-known techniques for addressing such problems.
Define an underlying spatial coordinate system in mm or some other convenient unit of measure. Then define three (invertible) functions (black, red, yellow) that convert back and forth between spatial coordinates and pixel coordinates; it sounds like linear functions will suffice. If there is noise in your measurements, you might find it convenient to discretize by setting the low-order bits to zero.
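A minimal sketch of one such invertible linear mapping; the pixel size constant is an illustrative assumption, not taken from the question:

# Linear, invertible mapping between stage coordinates (mm) and pixel offsets
# for the high-magnification ("red") camera.
RED_MM_PER_PX = 0.0005   # assumed size of one red-camera pixel on the sample

def red_pixel_to_stage(px, py, center_stage):
    # Pixel offset from the image center -> stage coordinates in mm.
    cx, cy = center_stage
    return cx + px * RED_MM_PER_PX, cy + py * RED_MM_PER_PX

def red_stage_to_pixel(sx, sy, center_stage):
    # Stage coordinates in mm -> pixel offset from the image center.
    cx, cy = center_stage
    return (sx - cx) / RED_MM_PER_PX, (sy - cy) / RED_MM_PER_PX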
You will take multiple photos and store them in a filesystem. Base the filename on the spatial coordinate of the center pixel, composed in this way: let x_bits be the spatial X coordinate with MSB first, and similarly for y_bits. Let the filename be the alternating X and Y bits, so e.g. the coord (0, 3) at two-bit resolution becomes "0101". Turn groups of four bits into hex nybbles, and treat the early parts of the filename as directory names, as you find convenient. At query time, nearby images of different resolutions then all appear together.
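A rough sketch of that naming scheme; the grid origin, step, and bit depth below are assumptions derived from the -50,000..+50,000 range with a 0.1 step:

def interleave_bits(x, y, bits=20):
    # Interleave the bits of non-negative grid indices x and y, MSB first.
    key = 0
    for i in range(bits - 1, -1, -1):
        key = (key << 1) | ((x >> i) & 1)
        key = (key << 1) | ((y >> i) & 1)
    return key

def tile_filename(x_stage, y_stage, origin=-50000.0, step=0.1, bits=20):
    # Quantize the stage coordinate onto the 0.1 grid, interleave, and
    # turn the early hex nybbles into directory names.
    x = int(round((x_stage - origin) / step))
    y = int(round((y_stage - origin) / step))
    width = (2 * bits + 3) // 4                      # hex digits needed
    hexname = format(interleave_bits(x, y, bits), '0{}x'.format(width))
    return "{}/{}/{}.png".format(hexname[:2], hexname[2:4], hexname[4:])

# The coord (0, 3) at two-bit resolution interleaves to "0101", as above.
assert interleave_bits(0, 3, bits=2) == 0b0101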
Alternatively, use arbitrary filenames and store coordinate + filename in a Postgres PostGIS table. Geometry queries like ST_Distance or ST_Within will then efficiently retrieve relevant images, using an index technique similar to the "interleaved bits" quadtree approach outlined above.
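A hedged sketch of that alternative; the table name, column names, and the search radius of 5 stage units are illustrative:

import psycopg2

conn = psycopg2.connect("dbname=scans")
cur = conn.cursor()

# One row per stored image: its filename plus the stage coordinate of its center.
cur.execute("""
    CREATE TABLE IF NOT EXISTS tiles (
        filename text PRIMARY KEY,
        center   geometry(Point)
    );
    CREATE INDEX IF NOT EXISTS tiles_center_gix ON tiles USING GIST (center);
""")

# Retrieve every image whose center lies within 5 units of a query point.
cur.execute("""
    SELECT filename FROM tiles
    WHERE ST_DWithin(center, ST_MakePoint(%s, %s), 5.0);
""", (12.3, -4.5))
print(cur.fetchall())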

libvips Nearest Neighbor / Bicubic Deep Zoom Pyramid Creation

I'm in the process of moving some of my code off of openzoom.py and onto libvips, but I'm not sure how to dictate the interpolation method, which is important. At the very least I need to be able to use bicubic/bilinear in one case and nearest neighbor in the other.
My old code is as follows:
creator = deepzoom.ImageCreator(tile_size=128, tile_overlap=2, tile_format="png",
                                image_quality=0.8, resize_filter="nearest")
creator.create(sourceFile, destFile)
Currently, using pyvips I have the following
image = pyvips.Image.new_from_file(sourceFile)
image.dzsave(destFile, tile_size=128, overlap=2,
             suffix='.png[Q=80]')
Any help would be greatly appreciated :)
By default, dzsave will average each 2x2 block of pixels, which is equivalent to bilinear.
Sometimes, for example with images where pixel values represent labels rather than intensity, you need a non-interpolatory downsize. For these cases, you can use the region_shrink parameter to select median or mode, which will both preserve label values.
I would use:
image = pyvips.Image.new_from_file(sourceFile, access='sequential')
image.dzsave(destFile,
             overlap=1,
             tile_size=126,
             region_shrink='mode',
             suffix='.png')
Don't forget to set the access hint. It'll give you a huge improvement in speed and memory behaviour for large images that don't support random access.
The PNG Q number sets quantization quality when outputting palettized images. Perhaps you mean compression? libvips defaults to 6, the PNG standard.
Are you sure you want overlap=2? The deepzoom standard is overlap 1. Overlap 1 means there is one pixel extra around the edge of every tile, so tiles in the centre of the image will share two pixels on every edge with their neighbours. Setting overlap=2 means you'll have four pixel overlaps, confusingly.
Likewise, tile_size=128 means most of your tiles will be 132x132 pixels. It doesn't matter for PNG, but JPG works best with multiples of 8 on an axis. I would set tile_size to (128 - 2 * overlap), as deepzoom does by default.
git master libvips adds max, min and nearest (always pick the top-left pixel) as well. A branch has lanczos3, but it was never merged for various reasons.

How to calculate average pixel values for multiple bands that fall within polygons

I have a hyperspectral raster image of an agricultural field with 270 spectral bands. I created a polygon shapefile that delineates which pixels belong to each treatment. There are 250 individual polygons that each correspond to a replication of each treatment. I want to find the average pixel value for each band for all of the pixels that fall within each polygon.
Image of raw hyperspectral data
Image of polygons delineating treatments
I tried using the zonal statistics tool in both ArcGIS and QGIS but both tools can only run statistics on one band at a time. Doing this 270 times seems a little excessive.
I also tried to use the split raster tool in ArcGIS to divide the raster into 250 individual rasters corresponding to each polygon. Once I split the raster, I tried using the band collection statistics tool but found that I could not input all rasters simultaneously although the tool appears to be capable of doing so. Each attempt results in the following error: ERROR 000964 Specified extent is invalid.
I've been conducting my analyses in ArcGIS Pro, QGIS (v.3.4.11), and Python (v.3.7) primarily using GDAL. So, I am open to using any of these options to conduct further analysis. I think this might be doable in Python, but my coding skills aren't great and I'm not sure where to start.
You can use the Spectral Python (SPy) package to read your hyperspectral image.
Let's say:
from spectral import *
img = open_image('<file location>')  # read the image into img
Make a numpy mask image from the polygon shapefile you created, for example by rasterizing it according to the classes (color coding). From this mask image you can extract the pixel coordinates.
import numpy as np
from matplotlib.image import imread
mask_img = imread('<polygon mask image location>')
Let's say you have assigned class 1 to one polygon; then, for all the pixel points in the class 1 polygon:
x, y = np.where(mask_img == 1)  # get the required coordinates
Use the same coordinates on the hyperspectral image:
band_avg = []
for band in np.arange(270):
    total = 0.0
    for i in np.arange(len(x)):          # no. of pixels in the class 1 polygon
        total += img[x[i], y[i], band]   # pixel value in this band
    band_avg.append(total / len(x))
print(band_avg)
band_avg will contain the average pixel value for each band over all of the pixels in a particular class/polygon. You can repeat the above for the other classes/polygons by getting x, y for different class IDs.
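If memory allows, the per-band averages can also be computed in a single vectorized step; this sketch assumes img can be loaded as a (rows, cols, bands) numpy array and that mask_img marks the class 1 pixels:

import numpy as np

cube = np.asarray(img.load(), dtype=float)   # full hyperspectral cube, (rows, cols, 270)
class1 = cube[mask_img == 1]                 # all class 1 spectra, (n_pixels, 270)
band_avg = class1.mean(axis=0)               # one average per band
print(band_avg)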

python imshow pixel size varies within plot

Dear stackoverflow community!
I need to plot a 2D-map in python using imshow. The command used is
plt.imshow(ux_map, interpolation='none', origin='lower', extent=[lonhg_all.min(), lonhg_all.max(), lathg_all.min(), lathg_all.max()])
The image is then saved as follows
plt.savefig('rdv_cr%s_cmlon%s_ux.png' % (2097, cmlon_ref))
and looks like
The problem is that when zooming into the plot one can notice that the pixels have different shapes (e.g. different widths). This is illustrated in the zoomed part below (taken from the top region of the first image):
Is there any reason for this behaviour? I input a rectangular grid for my data, but the problem does not have to do with the data itself, I suppose. Instead it is probably something related to rendering. I'd expect all pixels to be of equal shape, but as could be seen they have both different widths as well as heights. By the way, this also occurs in the interactive plot of matplotlib. However, when zooming in there, they become equally shaped all of a sudden.
I'm not sure whether https://github.com/matplotlib/matplotlib/issues/3057/ and the link therein might be related, but I can try playing around with dpi values. In any case, if anybody knows why this happens, could that person provide some background on why the computer cannot display the plot as intended using the commands above?
Thanks for your responses!
This is related to the way the image is mapped to the screen. To determine the color of a pixel in the screen, the corresponding color is sampled from the image. If the screen area and the image size do not match, either upsampling (image too small) or downsampling (image too large) occurs.
You observed a case of upsampling. For example, consider drawing a 4x4 image on a region of 6x6 pixels on the screen. Sometimes two screen pixels fall into an image pixel, and sometimes only one. Here, we observe an extreme case of differently sized pixels.
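A tiny demonstration of that 4x4 -> 6x6 case: with nearest sampling, some image columns end up two screen pixels wide and others only one.

import numpy as np

src_col = np.arange(6) * 4 // 6   # image column sampled for each screen column
print(src_col)                    # [0 0 1 2 2 3] -> column widths 2, 1, 2, 1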
When you zoom in in the interactive view, this effect seems to disappear. That is because you suddenly map the image to a much larger number of screen pixels. If one image pixel is enlarged to, say, 10 screen pixels and another to 11, you hardly notice the difference. The effect is most apparent when the image nearly matches the screen resolution.
A solution to work around this effect is to use interpolation, which may lead to an undesirable blurred look. To reduce the blur you can...
play with different interpolation functions. Try for example 'kaiser'
or up-scale the image by a constant factor using nearest neighbor interpolation (e.g. replace each pixel in the image by a block of pixels with the same color). Then any blurring will only affect the edges of the block.
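A sketch of that second option using the arrays from the question, assuming ux_map is a plain 2D numpy array (the factor is arbitrary):

import numpy as np
import matplotlib.pyplot as plt

factor = 4  # integer upscaling factor
# Replace each data pixel by a factor x factor block of identical values,
# so any later resampling blur only touches the block edges.
ux_big = np.repeat(np.repeat(ux_map, factor, axis=0), factor, axis=1)
plt.imshow(ux_big, interpolation='none', origin='lower',
           extent=[lonhg_all.min(), lonhg_all.max(),
                   lathg_all.min(), lathg_all.max()])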
