I want to visualize geo-data that carries timestamp information, and I use folium's plugins.TimestampedGeoJson in my Python code to do that. The code actually works, but I need to display more than one feature set on the map, and each of these sets should be toggleable from LayerControl.
Does anyone have an idea how to do that?
# example with one feature set
features_3 = []
for row in DF_3.itertuples():
    long = row.longitude
    lat = row.latitude
    data = row.data
    features_3.append(
        {
            "type": "Feature",
            "geometry": {
                "type": "Point",
                "coordinates": [long, lat],
            },
            "properties": {
                "time": str(row.xdate),
                "popup": "Record : " + str(data),
                "icon": "circle",
                "iconstyle": {
                    "fillColor": color_scale(data),
                    "fillOpacity": 0.6,
                    "stroke": "false",
                    "radius": 8,
                },
                "style": {"weight": 0},
            },
        }
    )

plugins.TimestampedGeoJson(
    {"type": "FeatureCollection", "features": features_3},
    period="P1D",
    add_last_point=True,
    auto_play=True,
    loop=False,
    max_speed=1,
    loop_button=True,
    time_slider_drag_update=True,
    duration="P1D",
).add_to(map)
LayerControl().add_to(map)
The code in the linked question "TimestampedGeoJson duration parameter causes polygons to disappear" might help you: it shows how to add multiple polygon layers to the same map.
I am trying to solve a particular case of comparing polygons to one another. I have five polygons, distributed as in the figure below; the black polygon is the one with the largest area.
There may be other similar cases; the main rule is to remove the smaller polygons among all those that have one or more side portions in common.
The data for this case are in a GeoJson file as follows:
{"type":"FeatureCollection","features":[
{"type":"Feature","properties":{"id":1},"geometry":{"type":"Polygon","coordinates":[[[3.4545135498046875,45.533288879467456],[3.4960556030273433,45.533288879467456],[3.4960556030273433,45.57055337226086],[3.4545135498046875,45.57055337226086],[3.4545135498046875,45.533288879467456]]]}},
{"type":"Feature","properties":{"id":2},"geometry":{"type":"Polygon","coordinates":[[[3.4545135498046875,45.52917023833511],[3.4960556030273433,45.52917023833511],[3.4960556030273433,45.53891018749409],[3.4545135498046875,45.53891018749409],[3.4545135498046875,45.52917023833511]]]}},
{"type":"Feature","properties":{"id":3},"geometry":{"type":"Polygon","coordinates":[[[3.4845542907714844,45.5298015824607],[3.5159683227539062,45.5298015824607],[3.5159683227539062,45.543388795387294],[3.4845542907714844,45.543388795387294],[3.4845542907714844,45.5298015824607]]]}},
{"type":"Feature","properties":{"id":4},"geometry":{"type":"Polygon","coordinates":[[[3.465328216552734,45.542667432984864],[3.4735679626464844,45.542667432984864],[3.4735679626464844,45.5478369923404],[3.465328216552734,45.5478369923404],[3.465328216552734,45.542667432984864]]]}},
{"type":"Feature","properties":{"id":5},"geometry":{"type":"Polygon","coordinates":[[[3.4545138850808144,45.56799974017372],[3.4588050842285156,45.56799974017372],[3.4588050842285156,45.57055290285386],[3.4545138850808144,45.57055290285386],[3.4545138850808144,45.56799974017372]]]}}]}
Is there a solution to delete only the two blue polygons (id 2 and 5), in Python?
By transforming the polygons into LineStrings, one could check whether one LineString is a portion of another? But I don't see how to do that. Or maybe check whether the LineStrings of the black and blue polygons have more than two points in common? But a LineString can't be converted into more than two points.
The following approach may work for you, using shared_paths, which correctly calls out the path overlap between polygons 1, 2 and 5:
import json

import shapely.ops as ops
import shapely.geometry as geo

with open('./test.json') as f:
    features = json.load(f)['features']

for f1 in features:
    for f2 in features:
        id1 = f1['properties']['id']
        id2 = f2['properties']['id']
        # compare each unordered pair only once
        if int(id1) > int(id2):
            s1 = geo.shape(f1['geometry'])
            s2 = geo.shape(f2['geometry'])
            coll = ops.shared_paths(s1.boundary, s2.boundary)
            if not coll.is_empty:
                print(f"{id1} and {id2} have shared path")
                # update your feature collection etc.
I had to reduce the precision to 5 decimal places in your feature geometry for this to work: initially it only detects the overlap between polygons 1 and 2, because the shared corner between polygons 1 and 5 is slightly off in your input FeatureCollection:
{
"type": "FeatureCollection",
"features": [{
"type": "Feature",
"properties": {
"id": 1
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[3.45451, 45.53328],
[3.49605, 45.53328],
[3.49605, 45.57055],
[3.45451, 45.57055],
[3.45451, 45.53328]
]
]
}
},
{
"type": "Feature",
"properties": {
"id": 2
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[3.45451, 45.52917],
[3.49605, 45.52917],
[3.49605, 45.53891],
[3.45451, 45.53891],
[3.45451, 45.52917]
]
]
}
},
{
"type": "Feature",
"properties": {
"id": 3
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[3.48455, 45.52980],
[3.51596, 45.52980],
[3.51596, 45.54338],
[3.48455, 45.54338],
[3.48455, 45.52980]
]
]
}
},
{
"type": "Feature",
"properties": {
"id": 4
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[3.465328, 45.54266],
[3.473567, 45.54266],
[3.473567, 45.54783],
[3.465328, 45.54783],
[3.465328, 45.54266]
]
]
}
},
{
"type": "Feature",
"properties": {
"id": 5
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[3.454513, 45.56799],
[3.458805, 45.56799],
[3.458805, 45.57055],
[3.454513, 45.57055],
[3.454513, 45.56799]
]
]
}
}
]
}
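Rather than editing the coordinates by hand, the precision reduction can be done programmatically. A minimal sketch using only the standard library (5 decimal places, as above; note the edited file above appears to truncate rather than round, which can differ in the last digit for some values):

```python
import json

def round_coords(obj, ndigits=5):
    """Recursively round every float in a nested coordinate array."""
    if isinstance(obj, float):
        return round(obj, ndigits)
    if isinstance(obj, list):
        return [round_coords(v, ndigits) for v in obj]
    return obj

feature = json.loads("""
{"type": "Feature", "properties": {"id": 5},
 "geometry": {"type": "Polygon", "coordinates":
   [[[3.4545138850808144, 45.56799974017372],
     [3.4588050842285156, 45.56799974017372]]]}}
""")

feature["geometry"]["coordinates"] = round_coords(feature["geometry"]["coordinates"])
```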
I want to plot some polygons contained in a GeoJson file. Is it possible to visualize a GeoJson file in Plotly that is not linked directly to a real-world location?
As an example, I can use GeoPandas to plot a generic GeoJson file:
import json
geodata = json.loads(
"""{ "type": "FeatureCollection",
"features": [
{ "type": "Feature",
"geometry": {"type": "Polygon", "coordinates": [[[0,0],[0,1],[1,1]]]},
"properties": {"id": "upper_left"}
},
{ "type": "Feature",
"geometry": {"type": "Polygon", "coordinates": [[[0,0],[1,1],[1,0]]]},
"properties": {"id": "lower_right"}
}
]
}""")
import geopandas as gpd
df_shapes = gpd.GeoDataFrame.from_features(geodata["features"])
df_shapes.plot(color="none")
The result displays the two polygons (triangles) contained in the GeoJson:
How would I plot the same map using Plotly? This answer suggests using scope to limit the base map that is shown. What can be done if there is no base map?
(I am not asking how to plot a square with a line. The GeoJson is just a simplified example.)
Plotly can draw these polygons either as traces or as layout shapes.
using traces
It's then a case of list / dict comprehensions to restructure the GeoJSON polygons into plotly's structure:
import json
import plotly.graph_objects as go
geodata = json.loads(
"""{ "type": "FeatureCollection",
"features": [
{ "type": "Feature",
"geometry": {"type": "Polygon", "coordinates": [[[0,0],[0,1],[1,1]]]},
"properties": {"id": "upper_left"}
},
{ "type": "Feature",
"geometry": {"type": "Polygon", "coordinates": [[[0,0],[1,1],[1,0]]]},
"properties": {"id": "lower_right"}
}
]
}"""
)
go.Figure(
[
go.Scatter(
**{
"x": [p[0] for p in f["geometry"]["coordinates"][0]],
"y": [p[1] for p in f["geometry"]["coordinates"][0]],
"fill": "toself",
"name": f["properties"]["id"],
}
)
for f in geodata["features"]
]
).update_layout(height=200, width=200, showlegend=False, margin={"l":0,"r":0,"t":0,"b":0})
using shapes
Use the geopandas geometry to get an SVG, then extract its path and add these polygons as shapes onto the layout:
from bs4 import BeautifulSoup

# plotly layout shapes take an SVG path string; use the shapely
# geometry's svg() output and extract the path's "d" attribute
df_shapes = df_shapes.assign(
    svgpath=df_shapes["geometry"].apply(
        lambda p: BeautifulSoup(p.svg(), "html.parser").find("path")["d"]
    )
)
go.Figure(
layout=dict(
height=200,
width=200,
showlegend=False,
margin={"l": 0, "r": 0, "t": 0, "b": 0},
xaxis={"range": [0, 1]},
yaxis={"range": [0, 1]},
shapes=[{"type": "path", "path": p} for p in df_shapes["svgpath"]],
)
)
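If you'd rather skip the geopandas/BeautifulSoup round trip, the SVG path string that plotly's shapes API expects can also be built directly from a GeoJSON ring. A minimal sketch (assumes simple polygons without holes; ring_to_svg_path is a made-up helper name):

```python
def ring_to_svg_path(ring):
    """Build an SVG path string ("M x,y L x,y ... Z") from one polygon ring."""
    points = " L ".join(f"{x},{y}" for x, y in ring)
    return f"M {points} Z"

# the two triangles from the question
rings = [
    [[0, 0], [0, 1], [1, 1]],
    [[0, 0], [1, 1], [1, 0]],
]

# same structure as the shapes=[...] entry in the layout above
shapes = [{"type": "path", "path": ring_to_svg_path(r)} for r in rings]
print(shapes[0]["path"])  # M 0,0 L 0,1 L 1,1 Z
```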
I am trying to index a GeoJSON file into Elasticsearch (version 7.6.2) using Python.
Here is the mapping I defined in Elasticsearch:
{
  "mappings": {
    "properties": {
      "geometry": {
        "properties": {
          "coordinates": {
            "type": "geo_shape"
          },
          "type": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          }
        }
      }
    }
  }
}
The geojson file looks like this:
{
"type": "FeatureCollection",
"name": "testting",
"crs": { "type": "name", "properties": { "name": "urn:ogc:def:crs:OGC:1.3:CRS84" } },
"features": [
{ "type": "Feature", "properties": { "LEGEND": "x_1", "THRESHOLD": -109, "COLOR": "0 0 255", "Prediction": "Coverage" }, "geometry": { "type": "MultiPolygon", "coordinates": [ [ [ [ 151.20061069847705, -33.886918725260998 ], [ 151.200620164862698, -33.886467994010133 ].....
However, when I write the file to Elasticsearch, inspired by this link:
How to index geojson file in elasticsearch?
import json
from elasticsearch import Elasticsearch, helpers

def geojson2es(gj):
    for feature in gj['features']:
        yield feature

with open(input_path + '/' + data) as f:
    gj = json.load(f)

es = Elasticsearch(hosts=[{'host': 'localhost', 'port': 9200}])
k = [{
    "_index": "test",
    "_source": feature,
} for feature in geojson2es(gj)]

helpers.bulk(es, k)
I get this error:
{'type': 'mapper_parsing_exception',
 'reason': 'failed to parse field [geometry.coordinates] of type [geo_shape]',
 'caused_by': {'type': 'parse_exception',
               'reason': 'shape must be an object consisting of type and coordinates'}}
Did anyone encounter a similar issue? How can I fix it?
Your mapping is not correct. The geo_shape type already implies type and coordinates, so you don't need to declare them again.
Your mapping should look like this instead, i.e. each feature has a type (e.g. Feature), a hash of properties and a geometry of type geo_shape:
{
"mappings": {
"properties": {
"type": {
"type": "keyword"
},
"properties": {
"type": "object"
},
"geometry": {
"type": "geo_shape"
}
}
}
}
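With that mapping, the whole geometry object is parsed as GeoJSON, so each bulk document must carry a geometry with both type and coordinates. A small stdlib-only pre-flight check can catch malformed features before the bulk call (valid_geometry is a hypothetical helper; the type names follow the GeoJSON spec, RFC 7946):

```python
# GeoJSON geometry types per RFC 7946
VALID_TYPES = {
    "Point", "MultiPoint", "LineString", "MultiLineString",
    "Polygon", "MultiPolygon", "GeometryCollection",
}

def valid_geometry(geom):
    """Shallow check that a geometry dict is shaped like GeoJSON."""
    if not isinstance(geom, dict) or geom.get("type") not in VALID_TYPES:
        return False
    key = "geometries" if geom["type"] == "GeometryCollection" else "coordinates"
    return key in geom

features = [
    {"type": "Feature", "properties": {},
     "geometry": {"type": "Point", "coordinates": [29.66, 1.01]}},
    {"type": "Feature", "properties": {},
     "geometry": {"coordinates": [29.66, 1.01]}},  # missing type
]

bad = [f for f in features if not valid_geometry(f["geometry"])]
print(len(bad))  # 1
```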
I have this geojson file:
{
"type": "FeatureCollection",
"crs": { "type": "name", "properties": { "name": "urn:ogc:def:crs:OGC:1.3:CRS84" } },
"features": [
{ "type": "Feature", "properties": { "visit_date": "2013-03-27Z", "name": "Mayi-Tatu", "n_workers": 150.0, "mineral": "Gold" }, "geometry": { "type": "Point", "coordinates": [ 29.66033, 1.01089 ] } },
{ "type": "Feature", "properties": { "visit_date": "2013-03-27Z", "name": "Mabanga", "n_workers": 115.0, "mineral": "Gold" }, "geometry": { "type": "Point", "coordinates": [ 29.65862, 1.00308 ] } },
{ "type": "Feature", "properties": { "visit_date": "2013-03-27Z", "name": "Molende", "n_workers": 130.0, "mineral": "Gold" }, "geometry": { "type": "Point", "coordinates": [ 29.65629, 0.98563 ] } },
...
{ "type": "Feature", "properties": { "visit_date": "2017-08-31Z", "name": "Kambasha", "n_workers": 37.0, "mineral": "Cassiterite" }, "geometry": { "type": "Point", "coordinates": [ 29.05973167, -2.25938167 ] } }
]
}
I read this file with the following code:
filename = "ipis_cod_mines.geojson"
df_congomines_crs84_geo = gpd.read_file(filename)
But when I check the crs property of df_congomines_crs84_geo,
df_congomines_crs84_geo.crs
I get {'init': 'epsg:4326'}, and I don't understand why I don't get the right CRS. (first question)
Then I read another dataset for the same area (both datasets belong to Congo):
df_countries_4326_geo = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
This dataset has crs equal to {'init': 'epsg:4326'}.
When I plot both datasets (without changing the CRS),
ax = df_countries_4326_geo.plot(alpha=0.5, color='brown', figsize=(11,4))
df_congomines_crs84_geo.plot(ax=ax, column='mineral')
plt.show()
I get the following image:
Why are the two plots not overlapping if they belong to the same area? How can I fix it? Is this problem related to the UTM zone? (second question)
CRS84 is equivalent to WGS84, for which the standard EPSG code is EPSG:4326. CRS84 was defined in the old GeoJSON spec (2008), so reading a GeoJSON file gives EPSG:4326 as the CRS.
A GeoJson with point features contains two attributes: City and Rating.
City, as the identifier, never changes, but Rating is updated on a regular basis.
The new Rating values are stored in a dictionary ("dnew").
My for loop is not working well. Please see the code below, where "#here is the problem" marks the problem I cannot solve.
import json
dnew = {"Budapest": "fair", "New York": "very good", "Rome": "awesome"}
data = {
"type": "FeatureCollection",
"name": "Cities",
"crs": { "type": "name", "properties": { "name": "urn:ogc:def:crs:OGC:1.3:CRS84" } },
"features": [
{ "type": "Feature", "properties": { "City": "New York", "Rating": "good" }, "geometry": { "type": "Point", "coordinates": [ -73.991836734693834, 40.736734693877537 ] } },
{ "type": "Feature", "properties": { "City": "Rome", "Rating": "fair" }, "geometry": { "type": "Point", "coordinates": [ 12.494557823129199, 41.903401360544223 ] } },
{ "type": "Feature", "properties": { "City": "Budapest", "Rating": "awesome" }, "geometry": { "type": "Point", "coordinates": [ 19.091836734693832, 47.494557823129256 ] } }
]
}
# at this point, the keys of the two dictionaries are compared; if they match,
# the value in the old dict is updated/replaced with the value from the new dict
for key in data["features"]:
    citykey = key["properties"]["City"]
    ratingvalue = key["properties"]["Rating"]
    #print(citykey + "| " + ratingvalue)
    for keynew in dnew:
        citynew = keynew
        ratingnew = dnew[keynew]
        #print(citynew + " | " + ratingnew)
        print(citykey + "==" + citynew)
        if citykey == citynew:
            #
            #here is the problem
            #
            data["features"]["properties"]["Rating"] = ratingnew
            print(True)
        else:
            print(False)
Error Message:
TypeError: list indices must be integers or slices, not str
Thank you!
A numeric index is missing after "features", since it's a list, not a dictionary:
data["features"][0]["properties"]["Rating"]
You're losing the benefit of dictionaries by looping over all the keys of dnew for each element in the data['features'] list.
E.Coms noted the problem you're having, but that fix only updates the first item in the list (data["features"][0]).
Perhaps the following will solve your problem:
for key in data["features"]:
    citykey = key["properties"]["City"]
    ratingvalue = key["properties"]["Rating"]
    #print(citykey + "| " + ratingvalue)
    if citykey in dnew:
        key["properties"]["Rating"] = dnew[citykey]
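Run against the question's sample data, the membership test updates each matching Rating in place; a self-contained sketch:

```python
dnew = {"Budapest": "fair", "New York": "very good", "Rome": "awesome"}
data = {
    "features": [
        {"properties": {"City": "New York", "Rating": "good"}},
        {"properties": {"City": "Rome", "Rating": "fair"}},
        {"properties": {"City": "Budapest", "Rating": "awesome"}},
    ]
}

for feature in data["features"]:
    city = feature["properties"]["City"]
    if city in dnew:  # O(1) dict lookup instead of an inner loop
        feature["properties"]["Rating"] = dnew[city]

print([f["properties"]["Rating"] for f in data["features"]])
# ['very good', 'awesome', 'fair']
```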