Google maps using python

I need to develop a tool (e.g. to calculate a polygon's area) and integrate it with Google Maps. I am not familiar with Java. Can I do this using Python? If yes, how can I go about integrating my code with Maps?

You can do it using OpenStreetMap instead of Google Maps, in an IPython/Jupyter Notebook, through the ipyleaflet package.
Just write (or import) your script in a Jupyter Notebook (a Python-based environment) and then take a look at the examples here:
https://github.com/ellisonbg/ipyleaflet/tree/master/examples
You will be able to draw whatever you want by defining new layers and so on.
Here is an example. Open your Jupyter Notebook and import these modules:
from ipyleaflet import (
    Map,
    Marker,
    TileLayer, ImageOverlay,
    Polyline, Polygon, Rectangle, Circle, CircleMarker,
    GeoJSON,
    DrawControl
)

m = Map(zoom=0)
dc = DrawControl()

def handle_draw(self, action, geo_json):
    print(action)
    print(geo_json)

dc.on_draw(handle_draw)
m.add_control(dc)
m
The map will appear. Zoom in by double-clicking on the spot you are interested in, then draw your polygon using the "Draw a polygon" tool.
This is just one suggestion; you can use other methods to calculate the polygon's area:
import pyproj
import shapely.ops as ops
from shapely.geometry.polygon import Polygon
from functools import partial

my_poly = dc.last_draw['geometry']['coordinates'][0]
geom = Polygon(my_poly)
geom_area = ops.transform(
    partial(
        pyproj.transform,
        pyproj.Proj(init='EPSG:4326'),
        pyproj.Proj(
            proj='aea',  # Albers equal-area, with standard parallels at the polygon's bounds
            lat_1=geom.bounds[1],
            lat_2=geom.bounds[3])),
    geom)
print(geom_area.area, 'square meters, which is equal to', geom_area.area / 1000000, 'square kilometers')
2320899322382.008 square meters, which is equal to 2320899.3223820077 square kilometers
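If you would rather avoid a projection entirely, the (lon, lat) ring from dc.last_draw can also be fed to a small spherical approximation of the shoelace formula. This is a sketch, not the method above: it assumes coordinates in degrees and a spherical Earth, which is accurate enough for all but very large polygons.

```python
import math

EARTH_RADIUS_M = 6371008.8  # mean Earth radius in meters

def spherical_ring_area(coords):
    """Approximate area (m^2) of a closed (lon, lat) ring on a sphere."""
    area = 0.0
    n = len(coords)
    for i in range(n):
        lon1, lat1 = coords[i]
        lon2, lat2 = coords[(i + 1) % n]
        # spherical analogue of the planar shoelace term
        area += math.radians(lon2 - lon1) * (
            math.sin(math.radians(lat1)) + math.sin(math.radians(lat2)))
    return abs(area) * EARTH_RADIUS_M ** 2 / 2

# A 1x1 degree square at the equator is roughly 12,365 km^2
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(spherical_ring_area(square) / 1e6, 'square kilometers')
```

For a polygon drawn with the DrawControl above, you would pass dc.last_draw['geometry']['coordinates'][0] in place of the square.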

Convert from plotter coordinates to world coordinates in PyVista

I am new to PyVista and VTK. I am implementing a mesh editing tool (Python 3.10, pyvista 0.37, vtk 9.1). When a user clicks, all points within a given radius of the mouse cursor's world coordinates (e.g. the projected point on the surface) should be selected. I have implemented this much through callbacks to mouse clicks using the PyVista plotter's track_click_position method.
My problem is that I also want the user to be able to preview the selection (highlight the vertices that will be selected) before they click. For this it is necessary to track the mouse location in world coordinates and to attach a callback function to the movement of the mouse that will highlight the relevant nodes.
The PyVista plotter's track_mouse_position method doesn't support attaching callbacks, but I figured out a workaround for that. In the minimal example below I have managed to track changes to the mouse cursor location, in pixels, in the plotter's coordinate system. I am stuck now on how to convert these into world coordinates. When the mouse hovers over the sphere, the world coordinates should be the projected location on the sphere. When the mouse hovers off the sphere, it should return nothing, or inf, or some other sentinel value.
import pyvista as pv

def myCallback(src, evt):
    C = src.GetEventPosition()  # appears to be in pixels of the viewer
    print(C)
    # how to convert C into world coordinates on the sphere?

sp = pv.Sphere()
p = pv.Plotter()
p.add_mesh(sp)
p.iren.add_observer("MouseMoveEvent", myCallback)
p.show()
Thank you very much for your help.
Harry
I figured this one out. The key was to use pick_mouse_position after calling track_mouse_position.
import pyvista as pv

def myCallback(src, evt):
    out = p.pick_mouse_position()
    print(out)

sp = pv.Sphere()
p = pv.Plotter()
p.add_mesh(sp)
p.track_mouse_position()
p.iren.add_observer("MouseMoveEvent", myCallback)
p.show()
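Once the world coordinate of the cursor is known, highlighting the vertices inside the selection radius is just a distance test against the mesh points. A minimal, framework-independent sketch (the vertex list and cursor position here are made up for illustration; with PyVista you would use the mesh's points array and the result of pick_mouse_position):

```python
import math

def points_within_radius(points, center, radius):
    """Return the indices of 3D points within `radius` of `center`."""
    return [i for i, p in enumerate(points)
            if math.dist(p, center) <= radius]

# Hypothetical vertex list and cursor world position
verts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (1.0, 1.0, 1.0)]
cursor = (0.0, 0.0, 0.0)
print(points_within_radius(verts, cursor, 0.5))  # -> [0, 1]
```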

Can you do Azimuthal Equidistant projections natively in GeoDjango?

I am working on converting a small project I wrote to find the overlapping boundaries of a shapefile within a radius of a certain point. The original was a mock-up I wrote using Shapely and GeoPandas; to make it more suitable for production, I am converting it all to GeoDjango.
There is one thing that is vital to this program, which is to create an equidistant projection of a circle on a map. I was able to do this with Shapely objects using pyproj and functools.
(Let it be known that this solution was found on Stack Overflow and is not my original solution.)
import pyproj
from shapely import geometry
from shapely.ops import transform
from functools import partial

def createGeoCircle(lat, lng, mi):
    proj_wgs84 = pyproj.Proj(init='epsg:4326')
    aeqd_proj = '+proj=aeqd +lat_0={lat} +lon_0={lng} +x_0=0 +y_0=0'
    project = partial(
        pyproj.transform,
        pyproj.Proj(aeqd_proj.format(lat=lat, lng=lng)),
        proj_wgs84)
    buf = geometry.Point(0, 0).buffer(mi * 1.60934 * 1000)  # miles -> meters
    circle = transform(project, buf)
    return circle
I attempted to again use this solution and create a GeoDjango MultiPolygon object from the Shapely object, but it results in incorrect placement and shapes.
Here is the code I use to cast the Shapely object coming from the above function:
shape_model(geometry=geos.MultiPolygon(geos.GEOSGeometry(createGeoCircle(41.378397, -81.2446768, 1).wkt)), state="CircleTest").save()
Here is the output in Django Admin. The picture is zoomed in to show the shape, but the location is in the middle of Antarctica; the coordinates given were meant to place it in Ohio.
To clear a few things up, my model is as follows:
class shape_model(geo_models.Model):
    state = geo_models.CharField('State Territory ID', max_length=80)
    aFactor = geo_models.FloatField()
    bFactor = geo_models.FloatField()
    geometry = geo_models.MultiPolygonField(srid=4326)
I can get the location correct by simply using a geodjango point and buffer, but it shows up as an oval as it is not equidistant. If anyone has any suggestions or hints, I would be very appreciative to hear them!
Okay, I have found a solution to this problem. I took the Shapely equidistant-projection code and expanded it to convert the result back to EPSG:4326. The updated function is as follows:
import json
import pyproj
from shapely import geometry
from shapely.ops import transform
from functools import partial

def createGeoCircle(lat, lng, mi):
    point = geometry.Point(lat, lng)
    local_azimuthal_projection = f"+proj=aeqd +lat_0={lat} +lon_0={lng} +x_0=0 +y_0=0"
    proj_wgs84 = pyproj.Proj('epsg:4326')
    wgs84_to_aeqd = partial(
        pyproj.transform,
        proj_wgs84,
        pyproj.Proj(local_azimuthal_projection),
    )
    aeqd_to_wgs84 = partial(
        pyproj.transform,
        pyproj.Proj(local_azimuthal_projection),
        proj_wgs84,
    )
    point_transformed = transform(wgs84_to_aeqd, point)
    buffer = point_transformed.buffer(mi * 1.60934 * 1000)  # miles -> meters
    buffer_wgs84 = transform(aeqd_to_wgs84, buffer)
    return json.dumps(geometry.mapping(buffer_wgs84))
I also dump the geometry mapping from this function, so it can now be loaded directly into a GEOS MultiPolygon rather than going through the WKT of the object. I load the circle into a model and save it using the following:
shape_model(geometry=geos.MultiPolygon(geos.GEOSGeometry(createGeoCircle(41.378397, -81.2446768, 1))), state="CircleTest", aFactor=1.0, bFactor=1.0).save()
FYI, this is not a native GeoDjango solution and relies on several other packages. If someone has a native solution, I would greatly prefer that!
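For what it's worth, the reason a plain buffer in degrees shows up as an oval is that one degree of longitude shrinks by cos(latitude) while a degree of latitude stays roughly constant. If you only need an approximate circle and want to avoid the projection round-trip, you can generate the ring directly. This is a rough flat-earth sketch, not the solution above, and is only reasonable for small radii away from the poles:

```python
import math

EARTH_RADIUS_M = 6371008.8  # mean Earth radius in meters

def geo_circle_coords(lat, lng, radius_m, n=64):
    """Approximate a circle of radius_m meters around (lat, lng) as a list
    of (lon, lat) vertices, scaling longitude offsets by cos(latitude)."""
    ring = []
    for k in range(n):
        theta = 2 * math.pi * k / n
        dlat = math.degrees(radius_m * math.cos(theta) / EARTH_RADIUS_M)
        dlon = math.degrees(radius_m * math.sin(theta) /
                            (EARTH_RADIUS_M * math.cos(math.radians(lat))))
        ring.append((lng + dlon, lat + dlat))
    return ring

# 1-mile circle around the point from the question
ring = geo_circle_coords(41.378397, -81.2446768, 1 * 1.60934 * 1000)
```

The resulting ring can be closed and wrapped in a GEOS Polygon/MultiPolygon directly, without pyproj or Shapely.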

datashader xarray.Image to holoviews Points

This is the code:
import datashader as ds
import pandas as pd
from colorcet import fire
from datashader import transfer_functions as tf
from datashader.utils import lnglat_to_meters
import holoviews as hv
import geoviews as gv
from holoviews.operation.datashader import datashade, spread, aggregate
hv.extension('bokeh')
df = pd.read_csv('...')
agg = ds.Canvas().points(df, 'x', 'y', agg=ds.count())
img = tf.shade(agg.where(agg['x']>0), cmap=fire)
url = 'https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{Z}/{Y}/{X}.jpg'
tile_opts = dict(width=1000,height=600,xaxis=None,yaxis=None,show_grid=False,bgcolor='black')
map_tiles = gv.WMTS(url).opts(style=dict(alpha=1.0), plot=tile_opts)
points = hv.Points(df, ['x', 'y'])
#points = img # <-- Using this does not work
ds_points = spread(datashade(points, width=1000, height=600, cmap=fire), px=2)
map_tiles * ds_points
The above code creates a holoviews Points object based on data from a pandas dataframe and uses spread() and datashade() functions in holoviews to plot the points on a map. However, I want to do some transformations on the data before I plot it on the map. I tried to use the functionality already available in datashader, but I'm unable to figure out how I can convert the xarray.Image object created by datashader into a holoviews Point object which can be plotted on top of the map tiles.
EDIT
I'm not able to format code properly in comments, so I'll just put it here.
I tried doing the following as a degenerate case:
from custom_operation import CustomOperation
points = hv.Points(df, ['x', 'y'])
CustomOperation(rasterize(points))
where CustomOperation is defined as:
from holoviews.operation import Operation

class CustomOperation(Operation):
    def _process(self, element, key=None):
        return element
This produces the following error:
AttributeError: 'Image' object has no attribute 'get'
The Image object created by Datashader is a regular grid/array of values that aggregates the original points by bin, so it is no longer possible to recover the original points. It would not be meaningful to use a HoloViews Points object on this already 2D-histogrammed data; a Points object expects a set of individual points, not a 2D array. Instead, you can use a HoloViews Image object, which accepts a 2D array like that generated by Datashader. The syntax would be something like hv.Image(img), though I can't test it with the above code because it's not runnable without the CSV file.
Note that if you take this approach, Datashader will render the points into a fixed-size grid, and HoloViews will then overlay that specific grid of values onto the map. Even if you zoom in or pan, you'll still see that same grid; it will never update to show a subset of the data at a higher resolution, as your current code does, because the Datashader computations will all have completed, giving you a fixed array, before you start plotting anything with HoloViews or Bokeh. If you want dynamic zooming and updating, don't call the Datashader API (Canvas, .points, tf.shade, etc.) separately; instead use the HoloViews operations you are already using (datashade, spread, rasterize, etc.), or define a custom HoloViews operation to encapsulate the processing you want to do (which can include manually calling the Datashader API if you need to), so that the processing is dynamically re-applied each time the user pans or zooms.
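To see why the aggregated Image can't be turned back into Points: the aggregation step is essentially a 2D histogram, so many input points collapse into a single count per bin and the original coordinates are discarded. A small numpy sketch of that idea (using numpy's histogram2d as a stand-in, not datashader itself):

```python
import numpy as np

# A few example points (x, y)
xs = np.array([0.1, 0.15, 0.8, 0.85, 0.9])
ys = np.array([0.2, 0.25, 0.7, 0.75, 0.8])

# Aggregate into a 2x2 grid of counts, analogous to
# ds.Canvas().points(df, 'x', 'y', agg=ds.count())
counts, _, _ = np.histogram2d(xs, ys, bins=2, range=[[0, 1], [0, 1]])
print(counts)  # each cell holds only a count; the coordinates are gone
```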

Geodesic buffering in python

Given land polygons as a Shapely MultiPolygon, I want to find the (Multi-)Polygon that represents the e.g. 12 nautical mile buffer around the coastlines.
Using the Shapely buffer method does not work since it uses euclidean calculations.
Can somebody tell me how to calculate geodesic buffers in python?
This is not a Shapely problem: Shapely's documentation explicitly states that the library is for planar computation only. Nevertheless, in order to answer your question, you should specify the coordinate system you are using for your multipolygons.
Assuming you are using WGS84 coordinates (lat, lon), this is a recipe I found in another SO question (fix-up-shapely-polygon-object-when-discontinuous-after-map-projection). You will need the pyproj library.
import pyproj
from shapely.geometry import MultiPolygon, Polygon
from shapely.ops import transform as sh_transform
from functools import partial

wgs84_globe = pyproj.Proj(proj='latlong', ellps='WGS84')

def pol_buff_on_globe(pol, radius):
    _lon, _lat = pol.centroid.coords[0]
    aeqd = pyproj.Proj(proj='aeqd', ellps='WGS84', datum='WGS84',
                       lat_0=_lat, lon_0=_lon)
    project_pol = sh_transform(partial(pyproj.transform, wgs84_globe, aeqd), pol)
    return sh_transform(partial(pyproj.transform, aeqd, wgs84_globe),
                        project_pol.buffer(radius))

def multipol_buff_on_globe(multipol, radius):
    return MultiPolygon([pol_buff_on_globe(g, radius) for g in multipol])
The pol_buff_on_globe function does the following: first, it builds an azimuthal equidistant projection centered on the polygon's centroid; then it reprojects the polygon into that projection; after that, it builds the buffer there, and finally it reprojects the buffered polygon back to the WGS84 coordinate system.
Some special care is needed:
You will need to work out how to translate the distance you want into the units used by the aeqd projection.
Be careful not to buffer across the poles (see the mentioned SO question).
Using the polygon's centroid to center the projection should guarantee the answer is good enough, but if you have specific precision requirements you should NOT USE this solution, or at least characterize the error for the typical polygons you are using.
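On the first caveat: the aeqd projection defined above works in meters (the proj default), so the 12-nautical-mile buffer of the question is just a unit conversion away (1 nautical mile = 1852 m by definition):

```python
NM_TO_M = 1852  # meters per international nautical mile, exact by definition

def nautical_miles_to_meters(nm):
    return nm * NM_TO_M

radius = nautical_miles_to_meters(12)
print(radius)  # -> 22224
```

That value can then be passed directly as the radius argument to multipol_buff_on_globe.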

What algorithm does mayavi.mlab.pipeline.iso_surface.IsoSurface use?

I've been sifting through the Mayavi documentation and Google, but I can't find any statement about which algorithm the IsoSurface class uses. If it helps, my source data comes from a 3D NumPy array passed to the mayavi.mlab.pipeline.scalar_field function. Here's the code for using the iso_surface function on an image containing a 3D cube:
import numpy as np
from mayavi import mlab
img = np.pad(np.ones((5,5,5)), 1, mode='constant')
src = mlab.pipeline.scalar_field(img, figure=False)
iso = mlab.pipeline.iso_surface(src, contours=0.5)
The iso_surface function generates an instance of IsoSurface. The code in mayavi\modules\iso_surface.py shows that mayavi.components.contour is used. The comments in mayavi\components\contour.py state that it wraps tvtk.ContourFilter. From the code found at tvtk\tvtk_classes.zip\tvtk_classes\contour_filter.py in my local installation, I found this in the __init__ method for the ContourFilter class:
tvtk_base.TVTKBase.__init__(self, vtk.vtkContourFilter, obj, update, **traits)
Looking at the source code for vtkContourFilter and associated documentation on www.vtk.org I don't see a reference to a publication or the name of the algorithm implemented there.
As you've already discovered, Mayavi's iso_surface module uses (eventually) VTK's vtkContourFilter. There are a couple of sentences in the book "Visualization Toolkit: An Object-Oriented Approach to 3D Graphics, 4th Edition" (Schroeder, Martin and Lorensen) that say something about the algorithms used by vtkContourFilter. This is from p.198 of that book:
Contouring in VTK is implemented using variations of the marching
cubes algorithm presented earlier. [...] For example, the tetrahedron
cell type implements "marching tetrahedron" and creates triangle
primitives, while the triangle cell type implements "marching
triangles" and generates line segments.
There's also a vtkMarchingCubes filter that's specific to the case of image data (regularly spaced data on a 1D, 2D or 3D grid); the book goes on to compare execution times between vtkMarchingCubes and vtkContourFilter for a 3D volume.
