Sample raster cell value at coordinates in Python?

I have many rasters in ESRI ASCII format (http://resources.arcgis.com/en/help/main/10.1/index.html#/Esri_ASCII_raster_format/009t0000000z000000/).
I need to extract cell values at given locations/coordinates. Can anyone suggest a Python package to achieve this? I suspect there may be something in the GDAL tools, but I have been unable to find anything so far.
I am looking for functionality similar to GMT grdtrack, to which you can pass a table of coordinates and retrieve the cell values.
http://gmt.soest.hawaii.edu/doc/5.1.0/grdtrack.html
I was hoping this is possible in Python, as the earlier and later stages of my analysis are all in Python.

Of course, you can do this with GDAL. Provided that the coordinates where you want to sample the raster are in the same projection as the raster, you can do something like this:
from osgeo import gdal

def world2Pixel(gt, x, y):
    # Convert map coordinates to pixel/line indices using the geotransform.
    # Assumes a north-up raster (rotation terms gt[2] and gt[4] are zero).
    ulX = gt[0]    # x of the upper-left corner
    ulY = gt[3]    # y of the upper-left corner
    xDist = gt[1]  # pixel width
    yDist = gt[5]  # pixel height (negative for north-up rasters)
    pixel = int((x - ulX) / xDist)
    line = int((y - ulY) / yDist)  # yDist is negative, so this yields a positive row
    return (pixel, line)

dataset = gdal.Open(filename)
gt = dataset.GetGeoTransform()
pixel, line = world2Pixel(gt, x, y)
band = dataset.GetRasterBand(1)
value = band.ReadAsArray(pixel, line, 1, 1)[0, 0]
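For grdtrack-style sampling of a whole table of points, a minimal sketch along these lines should work (the coords list and file name here are hypothetical, and the points must be in the raster's projection):
from osgeo import gdal

coords = [(451000.0, 4210000.0), (452500.0, 4208000.0)]  # hypothetical (x, y) points
dataset = gdal.Open("raster.asc")  # GDAL reads ESRI ASCII grids via its AAIGrid driver
gt = dataset.GetGeoTransform()
band = dataset.GetRasterBand(1)
values = []
for x, y in coords:
    pixel, line = world2Pixel(gt, x, y)
    values.append(band.ReadAsArray(pixel, line, 1, 1)[0, 0])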

Related

get elevation from lat/long of geotiff data in gdal

I have a mosaic tif file (gdalinfo below) I made (with some additional info on the tiles here) and have looked extensively for a function that simply returns the elevation (the z value of this mosaic) for a given lat/long. The functions I've seen want me to input the coordinates in the mosaic's own coordinate system, but I want to use lat/long. Is there something about GetGeoTransform() that I'm missing to achieve this?
For instance, this example shown below:
from osgeo import gdal
import affine
import numpy as np

def retrieve_pixel_value(geo_coord, data_source):
    """Return floating-point value that corresponds to given point."""
    x, y = geo_coord[0], geo_coord[1]
    forward_transform = affine.Affine.from_gdal(*data_source.GetGeoTransform())
    reverse_transform = ~forward_transform
    px, py = reverse_transform * (x, y)
    px, py = int(px + 0.5), int(py + 0.5)
    pixel_coord = px, py
    data_array = np.array(data_source.GetRasterBand(1).ReadAsArray())
    return data_array[pixel_coord[0]][pixel_coord[1]]
This gives me an out-of-bounds error, as it's likely expecting x/y coordinates (e.g. retrieve_pixel_value([153.023499, -27.468968], dataset)). I've also tried the following from here:
import rasterio

dat = rasterio.open(fname)
z = dat.read()[0]

def getval(lon, lat):
    idx = dat.index(lon, lat, precision=1E-6)
    return dat.xy(*idx), z[idx]
Is there a simple adjustment I can make so my function can query the mosaic in lat/long coords?
Much appreciated.
Driver: GTiff/GeoTIFF
Files: mosaic.tif
Size is 25000, 29460
Coordinate System is:
PROJCRS["GDA94 / MGA zone 56",
BASEGEOGCRS["GDA94",
DATUM["Geocentric Datum of Australia 1994",
ELLIPSOID["GRS 1980",6378137,298.257222101004,
LENGTHUNIT["metre",1]],
ID["EPSG",6283]],
PRIMEM["Greenwich",0,
ANGLEUNIT["degree",0.0174532925199433,
ID["EPSG",9122]]]],
CONVERSION["UTM zone 56S",
METHOD["Transverse Mercator",
ID["EPSG",9807]],
PARAMETER["Latitude of natural origin",0,
ANGLEUNIT["degree",0.0174532925199433],
ID["EPSG",8801]],
PARAMETER["Longitude of natural origin",153,
ANGLEUNIT["degree",0.0174532925199433],
ID["EPSG",8802]],
PARAMETER["Scale factor at natural origin",0.9996,
SCALEUNIT["unity",1],
ID["EPSG",8805]],
PARAMETER["False easting",500000,
LENGTHUNIT["metre",1],
ID["EPSG",8806]],
PARAMETER["False northing",10000000,
LENGTHUNIT["metre",1],
ID["EPSG",8807]],
ID["EPSG",17056]],
CS[Cartesian,2],
AXIS["easting",east,
ORDER[1],
LENGTHUNIT["metre",1,
ID["EPSG",9001]]],
AXIS["northing",north,
ORDER[2],
LENGTHUNIT["metre",1,
ID["EPSG",9001]]]]
Data axis to CRS axis mapping: 1,2
Origin = (491000.000000000000000,6977000.000000000000000)
Pixel Size = (1.000000000000000,-1.000000000000000)
Metadata:
AREA_OR_POINT=Area
Image Structure Metadata:
INTERLEAVE=BAND
Corner Coordinates:
Upper Left ( 491000.000, 6977000.000) (152d54'32.48"E, 27d19'48.33"S)
Lower Left ( 491000.000, 6947540.000) (152d54'31.69"E, 27d35'45.80"S)
Upper Right ( 516000.000, 6977000.000) (153d 9'42.27"E, 27d19'48.10"S)
Lower Right ( 516000.000, 6947540.000) (153d 9'43.66"E, 27d35'45.57"S)
Center ( 503500.000, 6962270.000) (153d 2' 7.52"E, 27d27'47.16"S)
Band 1 Block=25000x1 Type=Float32, ColorInterp=Gray
NoData Value=-999
Update 1 - I tried the following:
tif = r"mosaic.tif"
dataset = rio.open(tif)
d = dataset.read()[0]
def get_xy_coords(latlng):
transformer = Transformer.from_crs("epsg:4326", dataset.crs)
coords = [transformer.transform(x, y) for x,y in latlng][0]
#idx = dataset.index(coords[1], coords[0])
return coords #.xy(*idx), z[idx]
longx,laty = 153.023499,-27.468968
coords = get_elevation([(laty,longx)])
print(coords[0],coords[1])
print(dataset.width,dataset.height)
(502321.11181384244, 6961618.891167777)
25000 29460
So something is still not right. Maybe I need to subtract the coordinates of the bottom-left corner of the image, e.g.
coords[0]-dataset.bounds.left,coords[1]-dataset.bounds.bottom
where
In [78]: dataset.bounds
Out[78]: BoundingBox(left=491000.0, bottom=6947540.0, right=516000.0, top=6977000.0)
Update 2 - Indeed, subtracting the corners of my box seems to get closer... though I'm sure there is a much nicer way, just using the tif metadata, to get what I want.
import matplotlib.pyplot as plt

longx, laty = 152.94646, -27.463175
coords = get_xy_coords([(laty, longx)])
elevation = d[int(coords[1] - dataset.bounds.bottom), int(coords[0] - dataset.bounds.left)]
fig, ax = plt.subplots(figsize=(12, 12))
ax.imshow(d, vmin=0, vmax=400, cmap='terrain',
          extent=[dataset.bounds.left, dataset.bounds.right,
                  dataset.bounds.bottom, dataset.bounds.top])
ax.plot(coords[0], coords[1], 'ko')
plt.show()
You basically have two distinct steps:
1. Convert lon/lat coordinates to map coordinates; this is only necessary if your input raster is not already in lon/lat. Map coordinates are the coordinates in the projection that the raster itself uses.
2. Convert the map coordinates to pixel coordinates.
There are all kinds of tools you might use to make things simpler (like pyproj, rasterio, etc.), but for such a simple case it's probably nice to start by doing it all in GDAL; that also enhances your understanding of what steps are needed.
Inputs
from osgeo import gdal, osr
raster_file = r'D:\somefile.tif'
lon = 153.023499
lat = -27.468968
lon/lat to map coordinates
# fetch metadata required for transformation
ds = gdal.OpenEx(raster_file)
raster_proj = ds.GetProjection()
gt = ds.GetGeoTransform()
ds = None # close file, could also keep it open till after reading
# coordinate transformation (lon/lat to map)
# define source projection
# this definition ensures the order is always lon/lat compared
# to EPSG:4326 for which it depends on the GDAL version (2 vs 3)
source_srs = osr.SpatialReference()
source_srs.ImportFromWkt(osr.GetUserInputAsWKT("urn:ogc:def:crs:OGC:1.3:CRS84"))
# define target projection based on the file
target_srs = osr.SpatialReference()
target_srs.ImportFromWkt(raster_proj)
# convert
ct = osr.CoordinateTransformation(source_srs, target_srs)
mapx, mapy, *_ = ct.TransformPoint(lon, lat)
You could verify this intermediate result by for example adding it as Point WKT in something like QGIS (using the QuickWKT plugin, making sure the viewer has the same projection as the raster).
map coordinates to pixel
# apply affine transformation to get pixel coordinates
gt_inv = gdal.InvGeoTransform(gt) # invert for map -> pixel
px, py = gdal.ApplyGeoTransform(gt_inv, mapx, mapy)
# it will return fractional pixel coordinates, so convert to int
# before using them to read. Round to nearest with +0.5
py = int(py + 0.5)
px = int(px + 0.5)
# read pixel data
ds = gdal.OpenEx(raster_file) # open file again
elevation_value = ds.ReadAsArray(px, py, 1, 1)[0, 0]  # 1x1 array, index it to get the scalar
ds = None
The elevation_value variable should be the value you're after. I would definitely verify the result independently; try a few points in QGIS or with the gdallocationinfo utility:
gdallocationinfo -l_srs "urn:ogc:def:crs:OGC:1.3:CRS84" filename.tif 153.023499 -27.468968
# Report:
# Location: (4228P,4840L)
# Band 1:
# Value: 1804.51879882812
If you're reading a lot of points, there will be some threshold at which it would be faster to read a large chunk and extract the values from that array, compared to reading every point individually.
edit:
For applying the same workflow on multiple points at once a few things change.
So for example having the inputs:
lats = np.array([-27.468968, -27.468968, -27.468968])
lons = np.array([153.023499, 153.023499, 153.023499])
The coordinate transformation needs to use ct.TransformPoints instead of ct.TransformPoint which also requires the coordinates to be stacked in a single array of shape [n_points, 2]:
coords = np.stack([lons.ravel(), lats.ravel()], axis=1)
mapx, mapy, *_ = np.asarray(ct.TransformPoints(coords)).T
# reshape in case of non-1D inputs
mapx = mapx.reshape(lons.shape)
mapy = mapy.reshape(lons.shape)
Converting from map to pixel coordinates changes because the GDAL method for this only takes a single point. Doing it manually on the arrays would be:
px = gt_inv[0] + mapx * gt_inv[1] + mapy * gt_inv[2]
py = gt_inv[3] + mapx * gt_inv[4] + mapy * gt_inv[5]
And rounding the arrays to integer changes to:
px = (px + 0.5).astype(np.int32)
py = (py + 0.5).astype(np.int32)
If the raster (easily) fits in memory, reading all points would become:
ds = gdal.OpenEx(raster_file)
all_elevation_data = ds.ReadAsArray()
ds = None
elevation_values = all_elevation_data[py, px]
That last step could be optimized by checking highest/lowest pixel coordinates in both dimensions and only read that subset for example, but it would require normalizing the coordinates again to be valid for that subset.
The py and px arrays might also need to be clipped (e.g. with np.clip) if the input coordinates fall outside the raster; in that case the pixel coordinates will be < 0 or >= xsize/ysize.
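For convenience, the steps above can be assembled into a single helper. This is a sketch built from this answer's own code; sample_raster is my name for it, and it assumes a single-band raster that fits in memory:
import numpy as np
from osgeo import gdal, osr

def sample_raster(raster_file, lons, lats):
    # open once, fetch projection + geotransform
    ds = gdal.OpenEx(raster_file)
    target_srs = osr.SpatialReference()
    target_srs.ImportFromWkt(ds.GetProjection())
    gt_inv = gdal.InvGeoTransform(ds.GetGeoTransform())
    # lon/lat (CRS84, always lon/lat order) -> map coordinates
    source_srs = osr.SpatialReference()
    source_srs.ImportFromWkt(osr.GetUserInputAsWKT("urn:ogc:def:crs:OGC:1.3:CRS84"))
    ct = osr.CoordinateTransformation(source_srs, target_srs)
    coords = np.stack([np.ravel(lons), np.ravel(lats)], axis=1)
    mapx, mapy, *_ = np.asarray(ct.TransformPoints(coords)).T
    # map -> pixel coordinates, rounded to nearest integer
    px = (gt_inv[0] + mapx * gt_inv[1] + mapy * gt_inv[2] + 0.5).astype(np.int32)
    py = (gt_inv[3] + mapx * gt_inv[4] + mapy * gt_inv[5] + 0.5).astype(np.int32)
    # read everything at once (assumes a single-band raster that fits in memory)
    data = ds.ReadAsArray()
    ds = None
    return data[py, px]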

Does anyone know how to extract image coordinates from the Marmot dataset?

Marmot is a document image dataset (http://www.icst.pku.edu.cn/cpdp/data/marmot_data.htm) that labels several things such as document body, image area, table area, table caption, and so on. The dataset is used specifically for document image analysis research. All coordinates are given as 16-digit hexadecimal values in little-endian format. Has anyone worked with this dataset, and how do you convert those 16-digit XY coordinates into a human-understandable format?
Finally I got the clue after some analysis, and I'm posting it here in case anyone needs to investigate this dataset. The tricky part was the unit in which the given coordinates are converted to pixel values; it was difficult to trace because it is not mentioned in the manual/guideline, only elsewhere as an annotation.
First you have to convert each 16-character hexadecimal value as an IEEE 754 double. For example, the given coordinates for a label are,
BBox=['4074145c00000005', '4074dd95999999a9', '4080921e74bc6a80', '406fb9999999999a']
Convert using Python (Python 2 shown; a Python 3 version is given further down):
import struct
conv_pound = [struct.unpack('!d', t.decode('hex'))[0] for t in BBox]
You will get values in points (1/72 inch). We usually use coordinates in pixel units, and at the usual 96 DPI, 1 inch is 96 pixels. So,
conv_pound = [321.2724609375003, 333.8490234375009, 530.2648710937501, 253.8]
Then divide each value by 72 and multiply by 96 to get the corresponding pixel values:
in_pixel = [428.36328, 445.13203, 707.01983, 338.40000]
They count pixel positions from the bottom-left corner of the document image. If you measure from the top-left corner instead (the usual convention), you have to subtract the 2nd and 4th values from the image height. If the image [height, width] is [1123, 793], the above coordinates can be represented as integers as,
label_boundary = [428, 678, 707, 785]
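Putting the whole recipe together in Python 3 (a sketch; bbox_to_pixels is my name for it, and the 96 DPI and image height come from the example above):
import struct

def bbox_to_pixels(bbox_hex, img_height, dpi=96):
    # decode each 16-char hex string as an IEEE 754 double (big-endian byte order)
    pts = [struct.unpack('!d', bytes.fromhex(t))[0] for t in bbox_hex]
    # points (1/72 inch) -> pixels at the given DPI
    px = [v / 72.0 * dpi for v in pts]
    # flip the y values (2nd and 4th) from bottom-left to top-left origin
    px[1] = img_height - px[1]
    px[3] = img_height - px[3]
    return [round(v) for v in px]

BBox = ['4074145c00000005', '4074dd95999999a9', '4080921e74bc6a80', '406fb9999999999a']
print(bbox_to_pixels(BBox, 1123))  # -> [428, 678, 707, 785]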
After staring at the XMLs for an hour, I've found the last missing piece in the answer by @MMReza:
You don't need to rely on the units of measure in step 3. There is an attribute called "CropBox" on the root element "Page"; use that one to scale the coordinates.
I have something along the following lines (the y axis is also inverted here):
px0, py1, px1, py0 = list(map(hex_to_double, page.get("CropBox").split()))
pw = abs(px1 - px0)
ph = abs(py1 - py0)
for table in page.findall(".//Composite[@Label='TableBody']"):
    x0p, y1m, x1p, y0m = list(map(hex_to_double, table.get("BBox").split()))
    x0 = round(imgw * (x0p - px0) / pw)
    x1 = round(imgw * (x1p - px0) / pw)
    y0 = round(imgh * (py1 - y0m) / ph)
    y1 = round(imgh * (py1 - y1m) / ph)
In case anyone is trying to do this in Python 3 like I did, you only have to change step 2 of the other answer like this:
conv_pound = [struct.unpack('!d', bytes.fromhex(t))[0] for t in BBox]
I wanted to convert the coordinates and verify that my conversion actually worked, so I made this script to read a label file and the corresponding image file, extract the coordinates of the table body (for example), and visualize them on the image. It can be used to extract other fields in a similar manner. The comments explain it all.
import glob
import struct
import cv2
import binascii
import re

xml_files = glob.glob("path_to_labeled_files/*.xml")
for i in xml_files:
    # Open the current file and read everything
    cur_file = open(i, "r")
    content = cur_file.read()
    # Find index of all occurrences of only the needed portions (e.g. TableBody in this case)
    residxs = [l.start() for l in re.finditer('Label="TableBody"', content)]
    # Read the image
    img = cv2.imread("path_to_images_folder/" + i.split('/')[-1][:-3] + "jpg")
    # Traverse over all occurrences
    for r in residxs[:-1]:
        # List to store output points
        coords = []
        # Start index of an occurrence
        sidx = r
        # Substring from whole file content
        substr = content[sidx:sidx+400]
        # Now find start index and end index of coordinates in this substring
        sidx = substr.find('BBox="')
        eidx = substr.find('" CLIDs')
        # String containing only points
        points = substr[sidx+6:eidx]
        # Make the conversion (take care of endianness in unpack)
        for j in points.split(' '):
            if j == '':
                continue
            coords.append(struct.unpack('>d', binascii.unhexlify(j))[0])
        if len(coords) != 4:
            continue
        # As suggested by MMReza: points (1/72 inch) -> pixels (96 dpi)
        for k in range(4):
            coords[k] = (coords[k] / 72) * 96
        # Flip y values from bottom-left to top-left origin
        coords[1] = img.shape[0] - coords[1]
        coords[3] = img.shape[0] - coords[3]
        # Print the extracted coordinates
        print(coords)
        # Visualize it on the image
        cv2.rectangle(img, (int(coords[0]), int(coords[1])), (int(coords[2]), int(coords[3])), (255, 0, 0), 2)
    cv2.imshow("frame", img)
    cv2.waitKey(0)

Find indices of raster cells that intersect with a polygon

I want to get a list of indices (row,col) for all raster cells that fall within or are intersected by a polygon feature. Looking for a solution in python, ideally with gdal/ogr modules.
Other posts have suggested rasterizing the polygon, but I would rather have direct access to the cell indices if possible.
Since you don't provide a working example, it's a bit unclear what your starting point is. I made a dataset with 1 polygon; if you have a dataset with multiple polygons but only want to target a specific one, you can add SQLStatement or where to the gdal.Rasterize call.
Sample polygon
geojson = """{"type":"FeatureCollection",
"name":"test",
"crs":{"type":"name","properties":{"name":"urn:ogc:def:crs:OGC:1.3:CRS84"}},
"features":[
{"type":"Feature","properties":{},"geometry":{"type":"MultiPolygon","coordinates":[[[[-110.254,44.915],[-114.176,37.644],[-105.729,36.41],[-105.05,43.318],[-110.254,44.915]]]]}}
]}"""
Rasterizing
Rasterizing can be done with gdal.Rasterize. You need to specify the properties of the target grid; if there is no predefined grid, these could be extracted from the polygon itself.
ds = gdal.Rasterize('/vsimem/tmpfile', geojson, xRes=1, yRes=-1, allTouched=True,
                    outputBounds=[-120, 30, -100, 50], burnValues=1,
                    outputType=gdal.GDT_Byte)
mask = ds.ReadAsArray()
ds = None
gdal.Unlink('/vsimem/tmpfile')
Converting to indices
Retrieving the indices from the rasterized polygon can be done with Numpy:
y_ind, x_ind = np.where(mask==1)
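If you also want the map coordinates of those cells (e.g. the cell centers), you can push the indices back through the geotransform. A small sketch, assuming you grab the geotransform before closing the dataset above and that the grid is north-up:
gt = ds.GetGeoTransform()  # call this before setting ds = None
# convert (row, col) indices to map coordinates of the cell centers
x_coords = gt[0] + (x_ind + 0.5) * gt[1]
y_coords = gt[3] + (y_ind + 0.5) * gt[5]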
Clearly Rutger's solution above is the way to go, but I will leave my solution up. I developed a script that accomplished what I needed with the following steps:
Get the bounding box for each vector feature I want to check
Use the bounding box to limit the computational window (determine what portion of the raster could potentially have intersections)
Iterate over the cells within this part of the raster and construct a polygon geometry for each cell
Use ogr.Geometry.Intersects() to check if the cell intersects with the polygon feature
Note that I have only defined the methods, but I think the implementation should be pretty clear -- just call match_cells with the appropriate arguments (an ogr.Geometry object and the geotransform matrix). Code below:
from osgeo import ogr

# Convert projected coordinates to raster cell indices
def parse_coords(x, y, gt):
    row, col = None, None
    if x:
        col = int((x - gt[0]) // gt[1])
        # If only the x coordinate is provided, return the column index
        if not y:
            return col
    if y:
        row = int((y - gt[3]) // gt[5])
        # If only the y coordinate is provided, return the row index
        if not x:
            return row
    return (row, col)

# Construct polygon geometry from raster cell
def build_cell(cell, gt):
    row, col = cell
    xres, yres = gt[1], gt[5]
    x_0, y_0 = gt[0], gt[3]
    top = (yres * row) + y_0
    bottom = (yres * (row + 1)) + y_0
    left = (xres * col) + x_0
    right = (xres * (col + 1)) + x_0
    # Create ring topology
    ring = ogr.Geometry(ogr.wkbLinearRing)
    ring.AddPoint(left, bottom)
    ring.AddPoint(right, bottom)
    ring.AddPoint(right, top)
    ring.AddPoint(left, top)
    ring.AddPoint(left, bottom)
    # Create polygon
    box = ogr.Geometry(ogr.wkbPolygon)
    box.AddGeometry(ring)
    return box

# Iterate over feature geometries & check for intersection
def match_cells(inputGeometry, gt):
    matched_cells = []
    for f, feature in enumerate(inputGeometry):
        geom = feature.GetGeometryRef()
        bbox = geom.GetEnvelope()  # (minX, maxX, minY, maxY)
        xmin, xmax = [parse_coords(x, None, gt) for x in bbox[:2]]
        ymin, ymax = [parse_coords(None, y, gt) for y in bbox[2:]]
        # For a north-up raster the row of maxY (ymax) is the smaller index
        for cell_row in range(ymax, ymin + 1):
            for cell_col in range(xmin, xmax + 1):
                cell_box = build_cell((cell_row, cell_col), gt)
                if cell_box.Intersects(geom):
                    matched_cells += [[(cell_row, cell_col)]]
    return matched_cells
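A minimal usage sketch (the file names are hypothetical, and the layer must be in the same projection as the raster):
from osgeo import gdal, ogr

raster_ds = gdal.Open("raster.tif")
gt = raster_ds.GetGeoTransform()
vector_ds = ogr.Open("polygons.shp")
layer = vector_ds.GetLayer()
matches = match_cells(layer, gt)  # list of [(row, col)] entries per intersected cell
print(matches)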
If you want to do this manually, you'll need to test each cell for:
Square v Polygon intersection and
Square v Line intersection.
If you treat each square as a 2D point this becomes easier - it's now a Point v Polygon problem (see the sketch below). Check game-dev forums for collision algorithms.
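A sketch of that point-vs-polygon variant using ogr, testing each cell by its center; center_in_polygon is my name, and a north-up geotransform is assumed:
from osgeo import ogr

def center_in_polygon(row, col, gt, polygon):
    # map coordinates of the cell center
    cx = gt[0] + (col + 0.5) * gt[1]
    cy = gt[3] + (row + 0.5) * gt[5]
    point = ogr.Geometry(ogr.wkbPoint)
    point.AddPoint(cx, cy)
    return polygon.Contains(point)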
Good luck!

convert utm coordinate to coordinate in reference to neighboring zone

I have tens of thousands of rasters in NAD83 UTM 13N. I'm trying to extract data by points using arcpy.GetCellValue_management(raster.tif, point), but the very western side of the data is in the UTM 12N zone. Is there a way to get a coordinate from 12N but referenced to 13N? A project requirement is that all data be in UTM 13N even though it's a statewide project. I know it's silly.
This can be accomplished with GDAL. Save your dataPoints.shp in the UTM zone you desire (i.e. UTM 13N), then use GDAL to load the points layer, read each feature's fields and geometry, get the point coordinates (in UTM 13N), convert them to pixel indices via the geotransform, and read the raster band as an array at that location. Build a loop over all rasters and it works very fast. Thanks to Luke for giving the details here.
from osgeo import gdal, ogr

shp_filename = 'C:\\Path\\dataPoints_UTM13.shp'
ds = ogr.Open(shp_filename)
lyr = ds.GetLayer()
for feat in lyr:
    point_id_obj = feat.GetField("Sample")
    name = feat.GetField("Location_D")
    geom = feat.GetGeometryRef()
    mx, my = geom.GetX(), geom.GetY()
    path = 'C:\\RasterPath'
    raster = 'myraster'
    ras_open = gdal.Open('{a}\\{b}.tif'.format(a=path, b=raster))
    gt = ras_open.GetGeoTransform()  # was aws_open in the original, a leftover name
    rb = ras_open.GetRasterBand(1)
    px = abs(int((mx - gt[0]) / gt[1]))
    py = int((my - gt[3]) / gt[5])
    ras_obj = rb.ReadAsArray(px, py, 1, 1)
    print(point_id_obj)
    print(name)
    print(mx, my)
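If you also need to re-express a coordinate captured in UTM 12N as UTM 13N (the reverse works the same way), a small osr sketch like the following should do it. The EPSG codes 26912/26913 (NAD83 / UTM 12N and 13N) are assumptions based on the datum named in the question, and the example point is purely illustrative:
from osgeo import osr

src = osr.SpatialReference()
src.ImportFromEPSG(26912)  # NAD83 / UTM zone 12N (assumed)
dst = osr.SpatialReference()
dst.ImportFromEPSG(26913)  # NAD83 / UTM zone 13N (assumed)
ct = osr.CoordinateTransformation(src, dst)
# hypothetical easting/northing in zone 12N
x13, y13, _ = ct.TransformPoint(450000.0, 4300000.0)
print(x13, y13)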

What is the difference between Mat and MatND?

I am trying to extract data from a binary mask. All goes well, but switching to Python causes the data to shift a few pixels. It is enough that I cannot find the center. Oddly enough, saving the image displays the pixels at the correct location.
Here is my code. I basically create a normal Mat to use as output; however, a MatND is output according to the docs.
Am I extracting the data properly? If so, tell me. I am trying to find the center given points along the center, and I'd rather my data not be shifted.
import numpy as np
import cv2.cv as cv

def main():
    imgColor = cv.LoadImage(OPTICIMAGE, cv.CV_LOAD_IMAGE_COLOR)
    center, radius = centerandradus(imgColor)

def centerandradus(cvImg, ColorLower=None, ColorUpper=None):
    lowerBound = cv.Scalar(130, 0, 130)
    upperBound = cv.Scalar(171, 80, 171)
    size = cv.GetSize(cvImg)
    # note: cv.GetSize returns (width, height), while cv.CreateMat expects (rows, cols)
    output = cv.CreateMat(size[0], size[1], cv.CV_8UC1)
    cv.InRangeS(cvImg, lowerBound, upperBound, output)
    mask = np.asarray(output[:, :])
    # np.nonzero returns (row, col) == (y, x); naming these x, y swaps the axes
    x, y = np.nonzero(mask)
    x, y = np.array(x), np.array(y)
    h, k = centerEstimate(x, y)
    return np.array([h, k]), radius  # radius is never assigned in this excerpt

def centerEstimate(xList, yList):
    x_m = np.mean(np.r_[xList])
    y_m = np.mean(np.r_[yList])
    return x_m, y_m
Edit: I think the problem is with MatND, since I notice the data is already shifted when I try to print it out. If you need any more information, please ask.
Thank you for your time.
It seems there is no longer any difference between Mat and MatND; MatND is now obsolete.
By looking at opencv2/core.hpp (version 2.4.8):
typedef Mat MatND;
I learned that the orientation of the data differs between this matrix and findContours: the matrix is indexed as height x width (row, col), while contours store points as width x height (x, y). I hate reading APIs.
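To illustrate that indexing difference with a quick sketch (using numpy directly; the same swap applies when mixing numpy arrays with OpenCV point coordinates):
import numpy as np

mask = np.zeros((5, 8), dtype=np.uint8)  # 5 rows (height) x 8 cols (width)
mask[1, 6] = 255                         # set pixel at row=1 (y), col=6 (x)
rows, cols = np.nonzero(mask)            # numpy yields (row, col), i.e. (y, x)
print(rows[0], cols[0])                  # -> 1 6
point_xy = (cols[0], rows[0])            # OpenCV-style (x, y) needs the swap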
