Edit
Here is the proper way to do it, and the documentation:
import random
from osgeo import gdal, ogr

RASTERIZE_COLOR_FIELD = "__color__"

def rasterize(pixel_size=25):
    # Open the data source
    orig_data_source = ogr.Open("test.shp")
    # Make a copy of the layer's data source because we'll need to
    # modify its attributes table
    source_ds = ogr.GetDriverByName("Memory").CopyDataSource(
        orig_data_source, "")
    source_layer = source_ds.GetLayer(0)
    source_srs = source_layer.GetSpatialRef()
    x_min, x_max, y_min, y_max = source_layer.GetExtent()
    # Create a field in the source layer to hold the features' colors
    field_def = ogr.FieldDefn(RASTERIZE_COLOR_FIELD, ogr.OFTReal)
    source_layer.CreateField(field_def)
    source_layer_def = source_layer.GetLayerDefn()
    field_index = source_layer_def.GetFieldIndex(RASTERIZE_COLOR_FIELD)
    # Generate random values for the color field (it's here that the value
    # of the attribute should be used, but you get the idea)
    for feature in source_layer:
        feature.SetField(field_index, random.randint(0, 255))
        source_layer.SetFeature(feature)
    # Create the destination data source
    x_res = int((x_max - x_min) / pixel_size)
    y_res = int((y_max - y_min) / pixel_size)
    target_ds = gdal.GetDriverByName('GTiff').Create('test.tif', x_res,
                                                     y_res, 3, gdal.GDT_Byte)
    target_ds.SetGeoTransform((
        x_min, pixel_size, 0,
        y_max, 0, -pixel_size,
    ))
    if source_srs:
        # Make the target raster have the same projection as the source
        target_ds.SetProjection(source_srs.ExportToWkt())
    else:
        # Source has no projection (needs GDAL >= 1.7.0 to work)
        target_ds.SetProjection('LOCAL_CS["arbitrary"]')
    # Rasterize, burning the color field into all three bands
    err = gdal.RasterizeLayer(target_ds, (3, 2, 1), source_layer,
                              burn_values=(0, 0, 0),
                              options=["ATTRIBUTE=%s" % RASTERIZE_COLOR_FIELD])
    if err != 0:
        raise Exception("error rasterizing layer: %s" % err)
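To sanity-check the result, you can read the file back (a minimal sketch; it assumes the GTiff was written to test.tif as above and that NumPy is available for ReadAsArray()):

from osgeo import gdal

ds = gdal.Open('test.tif')
print(ds.RasterXSize, ds.RasterYSize, ds.RasterCount)
# Any non-zero pixel means features were actually burned
print(ds.GetRasterBand(1).ReadAsArray().max())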
Original question
I'm looking for information on how to use osgeo.gdal.RasterizeLayer() (the docstring is very succinct, and I can't find it in the C or C++ API docs; I only found a doc for the Java bindings).
I adapted a unit test and tried it on a .shp made of polygons:
import os
import sys
from osgeo import gdal, gdalconst, ogr, osr

def rasterize():
    # Create a raster to rasterize into.
    target_ds = gdal.GetDriverByName('GTiff').Create('test.tif', 1280, 1024, 3,
                                                     gdal.GDT_Byte)
    # Create a layer to rasterize from.
    cutline_ds = ogr.Open("data.shp")
    # Run the algorithm.
    err = gdal.RasterizeLayer(target_ds, [3, 2, 1], cutline_ds.GetLayer(0),
                              burn_values=[200, 220, 240])
    if err != 0:
        print("error:", err)

if __name__ == '__main__':
    rasterize()
It runs fine, but all I obtain is a black .tif.
What's the burn_values parameter for? Can RasterizeLayer() be used to rasterize a layer with features colored differently based on the value of an attribute?
If it can't, what should I use? Is AGG suitable for rendering geographic data? I want no antialiasing and a very robust renderer, able to draw very large and very small features correctly, possibly from "dirty data" (degenerate polygons, etc.), and sometimes specified in large coordinates.
Here, the polygons are differentiated by the value of an attribute (the colors don't matter, I just want to have a different one for each value of the attribute).
EDIT: I guess I'd use the QGIS Python bindings: http://www.qgis.org/wiki/Python_Bindings
That's the easiest way I can think of. I remember hand-rolling something before, but it's ugly. QGIS would be easier, even if you had to make a separate Windows installation (to get Python to work with it) and then set up an XML-RPC server to run it in a separate Python process.
If you can get GDAL to rasterize properly, that's great too.
I haven't used GDAL for a while, but here's my guess:
burn_values is for false color if you don't use Z-values. Everything inside your polygon is [255, 0, 0] (red) if you use bands=[1, 2, 3], burn_values=[255, 0, 0]. I'm not sure what happens to points - they might not plot.
Use gdal.RasterizeLayer(ds, bands, layer, burn_values, options=["BURN_VALUE_FROM=Z"]) if you want to use the Z values.
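A minimal sketch of that call (assuming target_ds and layer are set up as in the snippets above; as I understand it, the Z value is combined with the burn value):

# Burn from the geometries' Z values into band 1
err = gdal.RasterizeLayer(target_ds, [1], layer,
                          burn_values=[0],
                          options=["BURN_VALUE_FROM=Z"])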
I'm just pulling this from the tests you were looking at: http://svn.osgeo.org/gdal/trunk/autotest/alg/rasterize.py
Another approach - pull the polygon objects out, and draw them using shapely, which may not be attractive. Or look into geodjango (I think it uses openlayers to plot into browsers using JavaScript).
Also, do you need to rasterize? A pdf export might be better, if you really want precision.
Actually, I think I found that using Matplotlib (after extracting and projecting the features) was easier than rasterization, and it gave me a lot more control.
EDIT:
A lower level approach is here:
http://svn.osgeo.org/gdal/trunk/gdal/swig/python/samples/gdal2grd.py
Finally, you can iterate over the polygons (after transforming them into a local projection) and plot them directly. But you had better not have complex polygons, or you will have a bit of grief. If you have complex polygons, you are probably best off using shapely and r-tree from http://trac.gispython.org/lab if you want to roll your own plotter.
GeoDjango might be a good place to ask; they will know a lot more than me. Do they have a mailing list? There are also lots of Python mapping experts around, but none of them seem to worry about this. I guess they just plot it in QGIS or GRASS or something.
Seriously, I hope that somebody who knows what they are doing can reply.
Related
I am generating 3D meshes in PyVista, and I would like to update my integration test suite to ensure that it successfully shows my plots.
I'm hoping to adapt the methodology described here to work with PyVista. Unfortunately, I can't find any equivalent to plt.gcf() in PyVista.
Does anyone know of a workaround?
There are a few ways of doing this. First, PyVista returns an instance of pyvista.plotting.renderer.CameraPosition upon a successful plot. For example:
>>> import pyvista
>>> sphere = pyvista.Sphere()
>>> cpos = sphere.plot(off_screen=True)
>>> print(type(cpos))
<class 'pyvista.plotting.renderer.CameraPosition'>
Since it's necessary to set up a plot and renderer to properly display a plot, getting a camera position back means that your plot was successful.
Alternatively, you can save the screenshot and check that the file exists:
import os
import pyvista
sphere = pyvista.Sphere()
cpos = sphere.plot(off_screen=True, screenshot='tmp.png')
assert os.path.isfile('tmp.png')
You could also check the content of the saved image (or potentially the file size).
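Putting both checks together, here is a minimal pytest-style sketch (the test name and the tmp_path fixture are my own choices, not from the original answer):

import os
import pyvista

def test_sphere_plots(tmp_path):
    screenshot = str(tmp_path / "sphere.png")
    sphere = pyvista.Sphere()
    # Render off-screen so the test can run headless (e.g. in CI)
    cpos = sphere.plot(off_screen=True, screenshot=screenshot)
    # Note: newer PyVista versions only return a camera position when
    # return_cpos=True is passed, so the file check is the portable assertion
    assert os.path.isfile(screenshot)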
I am working on converting a small project I wrote to find the overlapping boundaries of a shapefile within a radius of a certain point. The original was a mock-up written with Shapely and GeoPandas; to make it more suitable for production, I am converting it all to GeoDjango.
One thing is vital to this program: creating an equidistant projection of a circle on a map. I was able to do this with Shapely objects using pyproj and functools.
Let it be known that this solution was found on stackoverflow and is not my original solution.
import pyproj
from functools import partial
from shapely import geometry
from shapely.ops import transform

def createGeoCircle(lat, lng, mi):
    proj_wgs84 = pyproj.Proj(init='epsg:4326')
    aeqd_proj = '+proj=aeqd +lat_0={lat} +lon_0={lng} +x_0=0 +y_0=0'
    project = partial(
        pyproj.transform,
        pyproj.Proj(aeqd_proj.format(lat=lat, lng=lng)),
        proj_wgs84)
    # Buffer around the projection origin (miles -> metres)
    buf = geometry.Point(0, 0).buffer(mi * 1.60934 * 1000)
    circle = transform(project, buf)
    return circle
I attempted to reuse this solution and create a GeoDjango MultiPolygon object from the shapely object, but it results in incorrect placement and shapes.
Here is the code I use to cast the shapely object coming from the above function.
shape_model(geometry=geos.MultiPolygon(geos.GEOSGeometry(createGeoCircle(41.378397, -81.2446768, 1).wkt)), state="CircleTest").save()
Here is the output in Django Admin. This picture is zoomed in to show the shape, but the location is in the middle of Antarctica; the coordinates given were meant to place it in Ohio.
To clear a few things up, my model is as follows:
from django.contrib.gis.db import models as geo_models  # assumed import

class shape_model(geo_models.Model):
    state = geo_models.CharField('State Territory ID', max_length=80)
    aFactor = geo_models.FloatField()
    bFactor = geo_models.FloatField()
    geometry = geo_models.MultiPolygonField(srid=4326)
I can get the location correct by simply using a geodjango point and buffer, but it shows up as an oval as it is not equidistant. If anyone has any suggestions or hints, I would be very appreciative to hear them!
Okay, I have found a solution to this problem. I used the Shapely equidistant-projection code and expanded it to convert the result back to EPSG:4326. The updated function is as follows:
import json
import pyproj
from functools import partial
from shapely import geometry
from shapely.ops import transform

def createGeoCircle(lat, lng, mi):
    point = geometry.Point(lat, lng)
    local_azimuthal_projection = f"+proj=aeqd +lat_0={lat} +lon_0={lng} +x_0=0 +y_0=0"
    proj_wgs84 = pyproj.Proj('epsg:4326')
    wgs84_to_aeqd = partial(
        pyproj.transform,
        proj_wgs84,
        pyproj.Proj(local_azimuthal_projection),
    )
    aeqd_to_wgs84 = partial(
        pyproj.transform,
        pyproj.Proj(local_azimuthal_projection),
        proj_wgs84,
    )
    point_transformed = transform(wgs84_to_aeqd, point)
    # Buffer in the local projection (miles -> metres), then back to WGS84
    buffer = point_transformed.buffer(mi * 1.60934 * 1000)
    buffer_wgs84 = transform(aeqd_to_wgs84, buffer)
    return json.dumps(geometry.mapping(buffer_wgs84))
I also dump the geometry mapping from this function, so it can now be loaded directly into the GEOS MultiPolygon rather than going through the WKT of the object. I load the circle into a model and save it using the following:
shape_model(geometry=geos.MultiPolygon(geos.GEOSGeometry(createGeoCircle(41.378397, -81.2446768, 1))), state="CircleTest", aFactor=1.0, bFactor=1.0).save()
FYI, this is not a native GeoDjango solution and relies on several other packages. If someone has a native solution, I would greatly prefer that!
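For what it's worth, here is an untested sketch of a mostly-native variant using GeoDjango's own GDAL bindings (SpatialReference and CoordTransform are real GeoDjango classes; the function name geodjango_circle and the reuse of the proj string are mine):

from django.contrib.gis.gdal import CoordTransform, SpatialReference
from django.contrib.gis.geos import MultiPolygon, Point

def geodjango_circle(lat, lng, mi):
    # Local azimuthal equidistant projection centred on the point
    aeqd = SpatialReference(
        f"+proj=aeqd +lat_0={lat} +lon_0={lng} +x_0=0 +y_0=0")
    wgs84 = SpatialReference(4326)
    center = Point(lng, lat, srid=4326)
    center.transform(CoordTransform(wgs84, aeqd))
    circle = center.buffer(mi * 1.60934 * 1000)  # buffer in metres
    circle.transform(CoordTransform(aeqd, wgs84))
    return MultiPolygon(circle)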
I am trying to plot two surfaces which touch at exactly two points but are otherwise well separated. Depending on the viewing angle, this either renders just fine (figure 1) or makes a mess, with the top surface s2 (plasma, red) obstructing the lower one s1 (figure 2). I suppose that is due to the order in which the surfaces are plotted, so mayavi just puts one in front even though mathematically it should be in the back. How can I solve this issue? Note that I would like to have different colormaps for the two surfaces, as they represent different things. Thanks a lot!
Figure 1: correct plot
Figure 2: wrong plot
Here is the code that produces the plot. The viewing angles were chosen in the interactive window; I'm not sure how to get the numerical values (see the note after the code).
import numpy as np
import mayavi.mlab

x, y = np.mgrid[-np.pi:np.pi:0.01, -np.pi:np.pi:0.01]

def surface1(x, y):
    return -np.sqrt((np.cos(x) + np.cos(y) - 1)**2 + np.sin(x)**2)

def surface2(x, y):
    return np.sqrt((np.cos(x) + np.cos(y) - 1)**2 + np.sin(x)**2)

s1 = mayavi.mlab.surf(x, y, surface1, colormap='viridis')
s2 = mayavi.mlab.surf(x, y, surface2, colormap='plasma')
mayavi.mlab.show()
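As an aside on recovering the viewing angles: if I remember the mlab API correctly, the current camera settings can be read back in the interactive session:

# Returns (azimuth, elevation, distance, focalpoint) of the current view
print(mayavi.mlab.view())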
EDIT:
Finally found the issue: I needed to specify the correct backend for rendering. Using ipython3 --gui=qt solves it, so the problem only appears with the default backend (whichever that is). I wish this were documented more clearly somewhere; it would have saved me a lot of work.
Using Python to interface with Paraview, I want to get the "Points" data from an integrate variable filter.
I tried GetArray("Points"), but it can't find it, even though you can clearly see it in the GUI if you go to the spreadsheet view.
My code is below. With the GUI approach, I see for Point ID = 0 that the array "Points" has three values (0.54475, -1.27142e-18, 4.23808e-19), which makes sense because the default arrow is symmetric in y and z.
Is there any way to get the value 0.54475 inside python?
MWE
#Import Paraview Libraries
#import sys
#sys.path.append('Path\\To\\Paraview\\bin\\Lib\\site-packages')
from paraview.simple import *
#### disable automatic camera reset on 'Show'
paraview.simple._DisableFirstRenderCameraReset()
# create a new 'Arrow'
arrow1 = Arrow()
# create a new 'Integrate Variables'
integrateVariables1 = IntegrateVariables(Input=arrow1)
pdata = paraview.servermanager.Fetch(integrateVariables1).GetPointData()
print(pdata.GetArray("Points"))  # prints None
You are very close. For all other arrays, you can access the value using the method you have written.
However, VTK treats the point coordinates slightly differently, so the code you need for the point coordinates is:
arrow1 = Arrow()
integrateVariables1 = IntegrateVariables(Input=arrow1)
integrated_filter = paraview.servermanager.Fetch(integrateVariables1)
print(integrated_filter.GetPoint(0))
This gives me: (0.5447500348091125, -1.2714243711743785e-18, 4.238081064918634e-19)
I would also suggest doing this in a Python Programmable Filter. Fetching the filter output from the server back to the client is not best practice; it is preferred to do all calculations on the server.
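As a sketch of that approach (this goes in the Programmable Filter's Script box, with the Integrate Variables filter as its input; inputs is the variable ParaView provides to such scripts):

# Script body of a Python Programmable Filter in ParaView.
# inputs[0] is the upstream (integrated) dataset.
pts = inputs[0].Points  # the point-coordinate array
print(pts[0])           # the single integrated point; its x is 0.54475 here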
Here is the basic script I use for drawing:
from graph_tool.all import *
g = load_graph("data.graphml")
g.set_directed(False)
pos = sfdp_layout(g)
graph_draw(g, pos=pos, output_size=(5000, 5000), vertex_text=g.vertex_index, vertex_fill_color=g.vertex_properties["color"], edge_text=g.edge_properties["name"], output="result.png")
The main problems here are the ugly edge text and vertices that sit too close to their parents. As I understand it, this happens because fit_view=True by default and the resulting image is scaled to fit the output size. When I set fit_view=False, the resulting image doesn't contain the graph (I see only a little piece of it).
Maybe I need another output size for fit_view=False, or some additional steps?
Today I ran into the same problem.
It seems that you can use fit_view=0.9: by passing a float you can scale the fit, so in that case the graph would appear at 90% of the normal size. If you pass 1, it will be drawn at the same size.
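Plugged into the question's call, that would look like this (a sketch; only fit_view is added):

graph_draw(g, pos=pos, output_size=(5000, 5000), fit_view=0.9,
           vertex_text=g.vertex_index,
           vertex_fill_color=g.vertex_properties["color"],
           edge_text=g.edge_properties["name"], output="result.png")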
Hope it helps.