Add multiple markers at each coordinate in flask-googlemaps - python

I tried to build a simple car rental web project using Flask, but ran into an issue adding multiple markers to coordinates in flask-googlemaps. I followed the tutorial at https://github.com/rochacbruno/Flask-GoogleMaps .
Below is my code for adding multiple coordinates to the Google map:
catdatas = CarsDataset.query.all()
locations = [d.serializer() for d in catdatas]
carmap = Map(
    identifier="carmap",
    style="height:500px;width:500px;margin:0;",
    lat=locations[0]['lat'],
    lng=locations[0]['lng'],
    markers=[(loc['lat'], loc['lng']) for loc in locations]
)
Each coordinate is added successfully, but I don't know how to add multiple markers to them. Thanks in advance!

According to the Flask-GoogleMaps docs, you can pass the markers parameter a list of dictionaries (objects). Example:
[
    {
        'icon': 'http://maps.google.com/mapfiles/ms/icons/green-dot.png',
        'lat': 37.4419,
        'lng': -122.1419,
        'infobox': "<b>Hello World</b>"
    },
    {
        'icon': 'http://maps.google.com/mapfiles/ms/icons/blue-dot.png',
        'lat': 37.4300,
        'lng': -122.1400,
        'infobox': "<b>Hello World from other place</b>"
    }
]
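Applied to the question's setup, the (lat, lng) tuples can be replaced by dictionaries built from the serialized rows. A minimal sketch, assuming the serializer returns 'lat'/'lng' keys as in the question (the 'name' field and icon URL are illustrative, not from the original code):

```python
# Hypothetical serialized rows, standing in for
# [d.serializer() for d in CarsDataset.query.all()]
locations = [
    {'lat': 37.4419, 'lng': -122.1419, 'name': 'Car A'},
    {'lat': 37.4300, 'lng': -122.1400, 'name': 'Car B'},
]

# One marker dict per location instead of a bare (lat, lng) tuple
markers = [
    {
        'icon': 'http://maps.google.com/mapfiles/ms/icons/blue-dot.png',
        'lat': loc['lat'],
        'lng': loc['lng'],
        'infobox': "<b>%s</b>" % loc['name'],
    }
    for loc in locations
]

# The list then goes where the tuples went:
# carmap = Map(identifier="carmap", lat=locations[0]['lat'],
#              lng=locations[0]['lng'], markers=markers)
```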

Related

Django aggregation Many To Many into list of dict

I've been trying to perform this operation for hours but couldn't figure it out.
Let's say I have a Django project with two classes like these:
from django.db import models

class Person(models.Model):
    name = models.CharField(max_length=100)
    address = models.ManyToManyField(to='Address')

class Address(models.Model):
    city = models.CharField(max_length=100)
    zip = models.IntegerField()
So it's just a simple Person having multiple addresses.
Then I create some objects:
addr1 = Address.objects.create(city='first', zip=12345)
addr2 = Address.objects.create(city='second', zip=34555)
addr3 = Address.objects.create(city='third', zip=5435)

person1 = Person.objects.create(name='person_one')
person1.address.set([addr1, addr2])
person2 = Person.objects.create(name='person_two')
person2.address.set([addr1, addr2, addr3])
Now comes the hard part: I want to make a single query that returns something like this:
result = [
    {
        'name': 'person_one',
        'addresses': [
            {
                'city': 'first',
                'zip': 12345
            },
            {
                'city': 'second',
                'zip': 34555
            }
        ]
    },
    {
        'name': 'person_two',
        'addresses': [
            {
                'city': 'first',
                'zip': 12345
            },
            {
                'city': 'second',
                'zip': 34555
            },
            {
                'city': 'third',
                'zip': 5435
            }
        ]
    }
]
The best I could get was using the ArrayAgg and JSONBAgg aggregates for Django (I'm on PostgreSQL, by the way):
from django.contrib.postgres.aggregates import JSONBAgg, ArrayAgg

result = Person.objects.values(
    'name',
    addresses=JSONBAgg('city')
)
But that's not enough: I can't pull a list of dictionaries out of the query directly as I would like to, just a list of values, or something useless using:
addresses=JSONBAgg(('city', 'zip'))
which returns a dictionary with random keys and the strings I passed as input as values.
Can someone help me out?
Thanks
If you use Postgres, you can do this (ArraySubquery requires Django 4.0+, JSONObject 3.2+):
from django.contrib.postgres.expressions import ArraySubquery
from django.db.models import F, OuterRef
from django.db.models.functions import JSONObject

subquery = Address.objects.filter(person=OuterRef("pk")).annotate(
    data=JSONObject(city=F("city"), zip=F("zip"))
).values_list("data")
persons = Person.objects.annotate(addresses=ArraySubquery(subquery))
Your requirement: to aggregate customized JSON objects after a group by (values) in Django.
Currently, to my knowledge, Django does not provide a function to aggregate manually created JSON objects. There are a couple of ways to solve this. One is to write a custom aggregate function, which is quite laborious. An easier approach is to use an aggregate function (ArrayAgg or JSONBAgg) together with RawSQL:
from django.contrib.postgres.aggregates import JSONBAgg
from django.db.models.expressions import RawSQL

result = Person.objects.values('name').annotate(
    addresses=JSONBAgg(RawSQL("json_build_object('city', city, 'zip', zip)", ()))
)
I hope this helps.
person.address already holds a queryset of addresses. From there you can use a list comprehension / model_to_dict to get the values you want.
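A minimal pure-Python sketch of that approach, with plain classes standing in for the model instances (in Django, person.address.all() would supply the address queryset):

```python
# Stand-ins for the Address instances from the question
class Addr:
    def __init__(self, city, zip_code):
        self.city = city
        self.zip = zip_code

people = [
    ('person_one', [Addr('first', 12345), Addr('second', 34555)]),
    ('person_two', [Addr('first', 12345), Addr('second', 34555), Addr('third', 5435)]),
]

# List comprehension building the nested list-of-dicts shape the question asks for
result = [
    {'name': name,
     'addresses': [{'city': a.city, 'zip': a.zip} for a in addrs]}
    for name, addrs in people
]
```

Note this runs one query per person in real Django code, unlike the single-query solutions above.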

Strange Plotly behaviour with Choropleth Mapbox

I want to create a choropleth map out of a GeoJSON file that looks like this:
{"type": "FeatureCollection", "features": [
    {'type': 'Feature',
     'geometry': {'type': 'MultiPolygon', 'coordinates': [[[[... , ...] ... [..., ...]]]]},
     'properties': {'id': 'A'}},
    {'type': 'Feature',
     'geometry': {'type': 'MultiPolygon', 'coordinates': [[[[... , ...] ... [..., ...]]]]},
     'properties': {'id': 'B'}},
    ...
]}
with each id property being different for each feature.
I mapped each feature (by its id property) to its particular region as follows:
regions = {
    'A': 'MOUNTAINS',
    'B': 'BEACH',
    ...
}
and then created a DataFrame to store each id and each region:
ids = []
for feature in geojson['features']:
    ids.append(feature['properties']['id'])

df = pd.DataFrame(ids, columns=['id'])
df['region'] = df['id'].map(regions)
That returns a DataFrame like this:
id region
0 A MOUNTAIN
1 B BEACH
2 C PLAIN
3 D FOREST
...
I then tried to create a choropleth map with that info:
fig = px.choropleth_mapbox(df, geojson=geojson, color="region",
                           locations="id", featureidkey="properties.id",
                           center={"lat": -9.893, "lon": -50.423},
                           mapbox_style="white-bg", zoom=9)
fig.update_layout(margin={"r": 0, "t": 0, "l": 0, "b": 0})
However, this results in an excessively long running time, which crashes about a minute or so later, with no error.
I wanted to check if there was something wrong with the GeoJSON file and/or with the mapping, so I assigned random numeric data to each id in df, by:
df['random_number'] = np.random.randint(0,100,size=len(df))
and re-tried the map with the following code:
fig = px.choropleth_mapbox(df, geojson=geojson, color="random_number",
                           locations="id", featureidkey="properties.id",
                           center={"lat": -9.893, "lon": -50.423},
                           mapbox_style="white-bg", zoom=9)
fig.update_layout(margin={"r": 0, "t": 0, "l": 0, "b": 0})
and it worked, so I am guessing there is some kind of trouble with the non-numeric values in the region column of df, which are not being properly passed to the choropleth map.
Any advice, help or solution will be much appreciated!

Returning an empty dictionary for the datasource of a datatable in plotly/dash

My callback function reads a value selected by the user (a site name), then queries data for that site and returns three figures plus one dictionary (df.to_dict('records')) to supply the data for a DataTable.
If the user selects a site for which there is no data, I return {}. That seems to break it. If I select a site, the data table fills in properly; switch to another site, same thing. But once I select a site with no data, the data table will no longer update, no matter which site I select.
Some relevant code:
The output is defined as:
Output('emission_table','data'),
The return from the callback is:
return time_series_figure,emissions_df.to_dict('records'),site_map,hotspot_figure
html.Div(style={'float': 'left', 'padding': '5px', 'width': '49%'}, children=[
    dash_table.DataTable(
        id='emission_table',
        data=[],
        columns=[
            # {'id': "site", 'name': "Site"},
            {'id': "dateformatted", 'name': "date"},
            {'id': "device", 'name': "device"},
            {'id': "emission", 'name': "Emission"},
            {'id': "methane", 'name': "CH4"},
            {'id': "wdir", 'name': "WDIR"},
            {'id': "wspd", 'name': "WSPD"},
            {'id': "wd_std", 'name': "WVAR"}],
            # {'id': "url", 'name': '(Link for Google Maps)', 'presentation': 'markdown'}],
        fixed_rows={'headers': True},
        row_selectable='multi',
        style_table={'height': '500px', 'overflowY': 'auto'},
        style_cell={'textAlign': 'left'})
]),
Any ideas what is happening? Is there a better way for the callback to return an empty data source for the datatable?
Thanks!
You haven't shared enough of your code (your callback specifically) to see exactly what is happening; however:

If the user selects a site for which there is no data, I return {}

is at least one reason why it doesn't work. The data property of a Dash DataTable needs to be a list, not a dictionary. You can, however, put dictionaries inside the list; each dictionary corresponds to a row in the DataTable.
So to reiterate and answer your question more directly:

Is there a better way for the callback to return an empty data source for the datatable?

Yes: return a list with any number of dictionaries inside (an empty list when there is no data).
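A hedged sketch of that idea as a small helper the callback could use (the helper name is illustrative, not from the original code):

```python
import pandas as pd

def table_data_for(df):
    """Return a value suitable for a DataTable's `data` property.

    Dash expects `data` to be a list of row dicts, so an empty
    result must be [] rather than {}.
    """
    if df is None or df.empty:
        return []  # empty list, not an empty dict
    return df.to_dict('records')

no_rows = table_data_for(pd.DataFrame())
rows = table_data_for(pd.DataFrame({'device': ['d1'], 'emission': [1.2]}))
```

The callback would then end with something like `return time_series_figure, table_data_for(emissions_df), site_map, hotspot_figure`.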

Pandas DataFrame.apply: create new column with data from two columns

I have a DataFrame (df) like this:
PointID Time geojson
---- ---- ----
36F 2016-04-01T03:52:30 {'type': 'Point', 'coordinates': [3.961389, 43.123]}
36G 2016-04-01T03:52:50 {'type': 'Point', 'coordinates': [3.543234, 43.789]}
The geojson column contains data in geoJSON format (essentially, a Python dict).
I want to create a new column in geoJSON format that includes the time coordinate. In other words, I want to inject the time information into the geoJSON info.
For a single value, I can successfully do:
oldjson = df.iloc[0]['geojson']
newjson = [oldjson['coordinates'][0], oldjson['coordinates'][1], df.iloc[0]['Time']]
For a single parameter, I successfully used DataFrame.apply in combination with a lambda (thanks to this SO related question).
But now I have two parameters, and I want to apply it to the whole DataFrame. As I am not confident with the .apply syntax and lambdas, I don't know if this is even possible. I would like to do something like this:
def inject_time(geojson, time):
    """
    Injects the time dimension into geoJSON coordinates.
    Expects a dict in geoJSON Point format.
    """
    geojson['coordinates'] = [geojson['coordinates'][0], geojson['coordinates'][1], time]
    return geojson

df["newcolumn"] = df["geojson"].apply(lambda x: inject_time(x, df['time']))
...but that does not work, because the function would inject the whole series.
EDIT:
I figured that the format of the timestamped geoJSON should be something like this:
TimestampedGeoJson({
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {
                "type": "LineString",
                "coordinates": [[-70, -25], [-70, 35], [70, 35]],
            },
            "properties": {
                "times": [1435708800000, 1435795200000, 1435881600000]
            }
        }
    ]
})
So the time element is in the properties element, but this does not change the problem much.
You need DataFrame.apply with axis=1 for processing by rows:
df['new'] = df.apply(lambda x: inject_time(x['geojson'], x['Time']), axis=1)

# temporarily display long strings in the column
with pd.option_context('display.max_colwidth', 100):
    print(df['new'])

0    {'type': 'Point', 'coordinates': [3.961389, 43.123, '2016-04-01T03:52:30']}
1    {'type': 'Point', 'coordinates': [3.543234, 43.789, '2016-04-01T03:52:50']}
Name: new, dtype: object
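One caveat worth noting: inject_time as written mutates the dicts stored in the geojson column in place. A variant that copies first, sketched with the same column names as the question:

```python
import pandas as pd

def inject_time(geojson, time):
    """Return a copy of the geoJSON point with time appended to its coordinates."""
    out = dict(geojson)  # shallow copy; a new coordinates list is assigned below
    out['coordinates'] = [geojson['coordinates'][0], geojson['coordinates'][1], time]
    return out

df = pd.DataFrame({
    'Time': ['2016-04-01T03:52:30'],
    'geojson': [{'type': 'Point', 'coordinates': [3.961389, 43.123]}],
})
df['new'] = df.apply(lambda x: inject_time(x['geojson'], x['Time']), axis=1)
```

This leaves the original geojson column untouched while producing the same new column.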

PyMongo: group with 2d geospatial index in conditions returns an error

The error returned is:
exception: manual matcher config not allowed
Here's my code:
cond = {
    'id': id,
    'date': {'$gte': start_date, '$lte': end_date},
    'location': {'$within': {'$box': box}},
}
reduce = 'function(obj, prev) { prev.count++; }'
rows = collection.group({'location': True}, cond, {'count': 0}, reduce)
When I remove location from the condition it works fine. If I change the query to a find it works fine too, so it's a problem with group.
What am I doing wrong?
MongoDB currently (version 1.6.2) doesn't support geo queries for mapreduce and group functions. See http://jira.mongodb.org/browse/SERVER-1742 for the issue ticket (and consider voting it up).
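Until that ticket is resolved, one possible workaround is to apply the geo condition through find and do the grouping client-side. A minimal sketch, with hard-coded documents standing in for the collection.find(cond) results:

```python
from collections import Counter

# Stand-ins for documents returned by collection.find(cond),
# which does accept the $within/$box geo condition
docs = [
    {'location': [10.0, 20.0]},
    {'location': [10.0, 20.0]},
    {'location': [11.5, 21.0]},
]

# Group by location in Python, mirroring the group()'s counting reduce
counts = Counter(tuple(d['location']) for d in docs)
rows = [{'location': list(loc), 'count': n} for loc, n in counts.items()]
```

This trades server-side grouping for extra data transfer, so it is only practical for result sets small enough to pull to the client.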
