Mapbox Distances with Python

I am trying to calculate distances among given points with Mapbox in Python. I read the documentation and some examples and I came up with the following code.
from mapbox import Distance

glinavos = {
    'type': 'Feature',
    'properties': {'name': 'glinavos'},
    'geometry': {
        'type': 'Point',
        'coordinates': [39.754598, 20.654121]}}

zoinos = {
    'type': 'Feature',
    'properties': {'name': 'zoinos'},
    'geometry': {
        'type': 'Point',
        'coordinates': [39.754204, 20.640761]}}

katogi = {
    'type': 'Feature',
    'properties': {'name': 'katogi'},
    'geometry': {
        'type': 'Point',
        'coordinates': [39.776992, 21.180688]}}

myDistance = Distance(access_token="pk.eyJ1IjoiaWxpdHNlIiwiYSI6ImNpenZmcm11YjAwMGQyd2x1Nm9nd2pqcGUifQ.1PZaOWTVajnQZGeBb_x1Bw")
result = myDistance.distances([glinavos, zoinos, katogi], 'driving')
It keeps returning a 403 error while everything else seems fine. The three test points are real places and I have tried with two access tokens: my public and my secret (more privileges) one. In addition, I have tried calling the service with the same points and tokens through curl and it worked fine. What is wrong with my script? In the code above I use the public token.
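One thing worth double-checking, independent of the 403 (an observation, not something confirmed here): GeoJSON orders coordinates as [longitude, latitude], and the points above read like [latitude, longitude] for this part of Greece (longitude around 20-21, latitude around 39-40). A minimal defensive sketch, assuming the values really are reversed:

```python
# Hypothetical helper: swap [lat, lon] into GeoJSON's [lon, lat] order.
def swapped(feature):
    lat, lon = feature['geometry']['coordinates']
    fixed = dict(feature)
    fixed['geometry'] = {'type': 'Point', 'coordinates': [lon, lat]}
    return fixed

glinavos = {'type': 'Feature',
            'properties': {'name': 'glinavos'},
            'geometry': {'type': 'Point',
                         'coordinates': [39.754598, 20.654121]}}

print(swapped(glinavos)['geometry']['coordinates'])  # [20.654121, 39.754598]
```

This would not explain an authorization error by itself, but it is worth ruling out before comparing the request with the working curl call.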

How to load a nested json file into a pandas DataFrame

I cannot seem to get the JSON data into a DataFrame.
I loaded the data:
data = json.load(open(r'path'))  # this works fine and displays:
json data
{'type': 'FeatureCollection', 'name': 'Altstadt Nord', 'crs': {'type': 'name', 'properties': {'name': 'urn:ogc:def:crs:OGC:1.3:CRS84'}}, 'features': [{'type': 'Feature', 'properties': {'Name': 'City-Martinsviertel', 'description': None}, 'geometry': {'type': 'Polygon', 'coordinates': [[[6.9595637, 50.9418396], [6.956624, 50.9417382], [6.9543173, 50.941603], [6.9529869, 50.9413664], [6.953062, 50.9408593], [6.9532873, 50.9396289], [6.9533624, 50.9388176], [6.9529333, 50.9378373], [6.9527509, 50.9371815], [6.9528367, 50.9360659], [6.9532122, 50.9352884], [6.9540705, 50.9350653], [6.9553258, 50.9350044], [6.9568815, 50.9351667], [6.9602074, 50.9355047], [6.9608189, 50.9349165], [6.9633939, 50.9348827], [6.9629433, 50.9410622], [6.9616236, 50.9412176], [6.9603898, 50.9414881], [6.9595637, 50.9418396]]]}}, {'type': 'Feature', 'properties': {'Name': 'Gereonsviertel', 'description': None}, 'geometry': {'type': 'Polygon', 'coordinates': [[[6.9629433, 50.9410622], [6.9629433, 50.9431646], [6.9611408, 50.9433539], [6.9601752, 50.9436649], [6.9588234, 50.9443409], [6.9579651, 50.9449763], [6.9573213, 50.945801], [6.9563128, 50.9451926], [6.9551756, 50.9448546], [6.9535663, 50.9446518], [6.9523432, 50.9449763], [6.9494464, 50.9452602], [6.9473435, 50.9454495], [6.9466998, 50.9456928], [6.9458415, 50.946531], [6.9434168, 50.9453954], [6.9424726, 50.9451926], [6.9404342, 50.9429888], [6.9404771, 50.9425156], [6.9403269, 50.9415016], [6.9400479, 50.9405281], [6.9426228, 50.9399872], [6.9439103, 50.9400143], [6.9453051, 50.9404875], [6.9461634, 50.9408931], [6.9467427, 50.941096], [6.9475581, 50.9410013], [6.9504227, 50.9413191], [6.9529869, 50.9413664], [6.9547464, 50.9416368], [6.9595637, 50.9418396], [6.9603898, 50.9414881], [6.9616236, 50.9412176], [6.9629433, 50.9410622]]]}}, {'type': 'Feature', 'properties': {'Name': 'Kunibertsviertel', 'description': None}, 'geometry': {'type': 'Polygon', 'coordinates': [[[6.9629433, 50.9431646], [6.9637129, 50.9454917], [6.9651506, 
50.9479252], [6.9666097, 50.9499124], [6.9667599, 50.9500882], [6.9587777, 50.9502504], [6.9573213, 50.945801], [6.9579651, 50.9449763], [6.9588234, 50.9443409], [6.9601752, 50.9436649], [6.9611408, 50.9433539], [6.9629433, 50.9431646]]]}}, {'type': 'Feature', 'properties': {'Name': 'Nördlich Neumarkt', 'description': None}, 'geometry': {'type': 'Polygon', 'coordinates': [[[6.9390331, 50.9364418], [6.9417153, 50.9358738], [6.9462214, 50.9358062], [6.9490109, 50.9355628], [6.9505129, 50.9353329], [6.9523798, 50.9352924], [6.9532122, 50.9352884], [6.9528367, 50.9360659], [6.9527509, 50.9371815], [6.9529333, 50.9378373], [6.9533624, 50.9388176], [6.9532381, 50.9398222], [6.9529869, 50.9413664], [6.9504227, 50.9413191], [6.9475581, 50.9410013], [6.9467427, 50.941096], [6.9453051, 50.9404875], [6.9439103, 50.9400143], [6.9424663, 50.9399574], [6.9400479, 50.9405281], [6.9390331, 50.9364418]]]}}]}
Now I cannot seem to fit it into a DataFrame:
pd.DataFrame(data) raises ValueError: Mixing dicts with non-Series may lead to ambiguous ordering.
I tried to flatten it with json_flatten, but got ModuleNotFoundError: No module named 'flatten_json', even though I installed json-flatten via pip.
I also tried df = pd.DataFrame.from_dict(data, orient='index'):
df
Out[22]:
0
type FeatureCollection
name Altstadt Nord
crs {'type': 'name', 'properties': {'name': 'urn:o...
features [{'type': 'Feature', 'properties': {'Name': 'C...
I think you can use json_normalize to load them into pandas.
path_to_json.json in this case is your full JSON file (with double quotes).
import json
import pandas as pd

with open('path_to_json.json') as f:
    data = json.load(f)

df = pd.json_normalize(data, record_path=['features'], meta=['name'])
print(df)
This results in a dataframe with one row per feature.
You can add further record paths to json_normalize to create more columns for the polygon coordinates.
You can find more documentation at https://pandas.pydata.org/pandas-docs/version/1.2.0/reference/api/pandas.json_normalize.html
Hope that helps.
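To make the json_normalize call concrete, here is a runnable sketch using a trimmed, hypothetical subset of the GeoJSON from the question:

```python
import pandas as pd

# Trimmed, hypothetical subset of the FeatureCollection from the question.
data = {
    'type': 'FeatureCollection',
    'name': 'Altstadt Nord',
    'features': [
        {'type': 'Feature',
         'properties': {'Name': 'City-Martinsviertel', 'description': None},
         'geometry': {'type': 'Polygon',
                      'coordinates': [[[6.9595637, 50.9418396]]]}},
        {'type': 'Feature',
         'properties': {'Name': 'Gereonsviertel', 'description': None},
         'geometry': {'type': 'Polygon',
                      'coordinates': [[[6.9629433, 50.9410622]]]}},
    ],
}

# One row per feature; meta copies the collection-level 'name' into every row.
df = pd.json_normalize(data, record_path=['features'], meta=['name'])

# Nested dicts become dotted column names such as 'properties.Name'.
print(df['properties.Name'].tolist())
```

The dotted columns ('properties.Name', 'geometry.coordinates', and so on) are how json_normalize flattens the nested feature dicts.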
The json data contains elements with different datatypes and these cannot be loaded into one single dataframe.
View datatypes in the json:
[type(data[k]) for k in data.keys()]
# Out: [str, str, dict, list]
data.keys()
# Out: dict_keys(['type', 'name', 'crs', 'features'])
You can load each single chunk of data in a separate dataframe like this:
df_crs = pd.DataFrame(data['crs'])
df_features = pd.DataFrame(data['features'])
data['type'] and data['name'] are strings
data['type']
# Out: 'FeatureCollection'
data['name']
# Out: 'Altstadt Nord'
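To make the per-chunk approach concrete, here is a runnable sketch with a trimmed, hypothetical subset of the data:

```python
import pandas as pd

# Trimmed, hypothetical subset of the GeoJSON from the question.
data = {
    'type': 'FeatureCollection',
    'name': 'Altstadt Nord',
    'crs': {'type': 'name',
            'properties': {'name': 'urn:ogc:def:crs:OGC:1.3:CRS84'}},
    'features': [
        {'type': 'Feature',
         'properties': {'Name': 'City-Martinsviertel', 'description': None},
         'geometry': {'type': 'Polygon',
                      'coordinates': [[[6.9595637, 50.9418396]]]}},
    ],
}

# Strings stay as plain metadata; the dict and list chunks get their own frames.
df_crs = pd.DataFrame(data['crs'])
df_features = pd.DataFrame(data['features'])

print(df_features.shape)  # (1, 3)
```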

Why does apispec validation fail on format similar to documentation example for Python Flask API backend?

I am using apispec in a Python/Flask API backend.
I followed the format found in the apispec documentation example.
See: https://apispec.readthedocs.io/en/latest/
Can anyone tell me why I am getting a validation error with the below json schema and data? It says "responses" is required but it looks like it is there. Is the structure incorrect? Any help appreciated!
openapi_spec_validator.exceptions.OpenAPIValidationError: 'responses' is a required property

Failed validating 'required' in schema['properties']['paths']['patternProperties']['^/']['properties']['get']:
{'additionalProperties': False,
'description': 'Describes a single API operation on a path.',
'patternProperties': {'^x-': {'$ref': '#/definitions/specificationExtension'}},
'properties': {'callbacks': {'$ref': '#/definitions/callbacksOrReferences'},
'deprecated': {'type': 'boolean'},
'description': {'type': 'string'},
'externalDocs': {'$ref': '#/definitions/externalDocs'},
'operationId': {'type': 'string'},
'parameters': {'items': {'$ref': '#/definitions/parameterOrReference'},
'type': 'array',
'uniqueItems': True},
'requestBody': {'$ref': '#/definitions/requestBodyOrReference'},
'responses': {'$ref': '#/definitions/responses'},
'security': {'items': {'$ref': '#/definitions/securityRequirement'},
'type': 'array',
'uniqueItems': True},
'servers': {'items': {'$ref': '#/definitions/server'},
'type': 'array',
'uniqueItems': True},
'summary': {'type': 'string'},
'tags': {'items': {'type': 'string'},
'type': 'array',
'uniqueItems': True}},
'required': ['responses'],
'type': 'object'}
On instance['paths']['/v1/activity']['get']:
{'get': {'description': 'Activity Get',
'responses': {'200': {'content': {'application/json': {'schema': 'ActivitySchema'}},
'description': 'success'}},
'security': [{'AccessTokenAuth': []}],
'tags': ['user', 'admin']}}
For reference, here is the original source comment that the data comes from:
"""
---
get:
description: Activity Get
responses:
200:
description: success
content:
application/json:
schema: ActivitySchema
security:
- AccessTokenAuth: []
tags:
- user
- admin
"""
I found the answer in the apispec documentation at:
https://apispec.readthedocs.io/en/latest/using_plugins.html#example-flask-and-marshmallow-plugins
where it says:
"If your API uses method-based dispatching, the process is similar. Note that the method no longer needs to be included in the docstring."
This is slightly misleading since it's not "no longer needs to be included" but rather "cannot be included".
So the correct doc string is:
"""
---
description: Activity Get
responses:
200:
description: success
content:
application/json:
schema: ActivitySchema
security:
- AccessTokenAuth: []
tags:
- user
- admin
"""

Iterate through dictionary to replace leading zeros?

I want to iterate through this dictionary, find any 'id' that has a leading zero, like the one below, and replace it with the zero removed. So 'id': '01001' would become 'id': '1001'.
Here is how to get the data I'm working with:
from urllib.request import urlopen
import json
with urlopen('https://raw.githubusercontent.com/plotly/datasets/master/geojson-counties-fips.json') as response:
    counties = json.load(response)
So far I've been able to get one ID at a time, but I'm not sure how to loop through to get all of the IDs.
My code so far: counties['features'][0]['id']
{ 'type': 'FeatureCollection',
'features': [{'type': 'Feature',
'properties': {'GEO_ID': '0500000US01001',
'STATE': '01',
'COUNTY': '001',
'NAME': 'Autauga',
'LSAD': 'County',
'CENSUSAREA': 594.436},
'geometry': {'type': 'Polygon',
'coordinates': [[[-86.496774, 32.344437],
[-86.717897, 32.402814],
[-86.814912, 32.340803],
[-86.890581, 32.502974],
[-86.917595, 32.664169],
[-86.71339, 32.661732],
[-86.714219, 32.705694],
[-86.413116, 32.707386],
[-86.411172, 32.409937],
[-86.496774, 32.344437]]]},
'id': '01001'}
]
}
from urllib.request import urlopen
import json

with urlopen('https://raw.githubusercontent.com/plotly/datasets/master/geojson-counties-fips.json') as response:
    counties = json.load(response)
Then iterate over the list of ids in your JSON structure and update each id:
counties['features'][0]['id'] = counties['features'][0]['id'].lstrip("0")
lstrip will remove leading zeroes from the string.
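As a quick illustration of that behaviour, including one edge case worth knowing about (lstrip strips a set of characters, not a fixed prefix):

```python
# lstrip('0') removes every leading zero character, however many there are.
print('01001'.lstrip('0'))            # 1001
print('000000000001001'.lstrip('0'))  # 1001

# Edge case: a string consisting only of zeros is stripped to the empty string.
print('0000'.lstrip('0') == '')       # True
```

FIPS county ids are never all zeros, so this edge case should not bite here, but it matters for general-purpose code.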
Suppose your dictionary counties has the following data. You can use the following code:
counties={'type': 'FeatureCollection',
'features': [ {'type': 'Feature','properties': {'GEO_ID': '0500000US01001','STATE': '01','COUNTY': '001','NAME': 'Autauga', 'LSAD': 'County','CENSUSAREA': 594.436},
'geometry': {'type': 'Polygon','coordinates': [[[-86.496774, 32.344437],[-86.717897, 32.402814],[-86.814912, 32.340803],
[-86.890581, 32.502974],
[-86.917595, 32.664169],
[-86.71339, 32.661732],
[-86.714219, 32.705694],
[-86.413116, 32.707386],
[-86.411172, 32.409937],
[-86.496774, 32.344437] ]] } ,'id': '01001'}, {'type': 'Feature','properties': {'GEO_ID': '0500000US01001','STATE': '01','COUNTY': '001','NAME': 'Autauga', 'LSAD': 'County','CENSUSAREA': 594.436},
'geometry': {'type': 'Polygon','coordinates': [[[-86.496774, 32.344437],[-86.717897, 32.402814],[-86.814912, 32.340803],
[-86.890581, 32.502974],
[-86.917595, 32.664169],
[-86.71339, 32.661732],
[-86.714219, 32.705694],
[-86.413116, 32.707386],
[-86.411172, 32.409937],
[-86.496774, 32.344437] ]] } ,'id': '000000000001001'} ]}
for feature in counties['features']:
    feature['id'] = feature['id'].lstrip("0")
print(counties)
Here is a shorter and faster way of doing this, using JSON object hooks:
from urllib.request import urlopen
import json

def stripZeroes(d):
    if 'id' in d:
        d['id'] = d['id'].lstrip('0')
    return d

with urlopen('https://raw.githubusercontent.com/plotly/datasets/master/geojson-counties-fips.json') as response:
    counties = json.load(response, object_hook=stripZeroes)
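To see the hook in action without fetching the full dataset, here is a self-contained sketch on a tiny, hypothetical piece of JSON:

```python
import json

def strip_zeroes(d):
    # object_hook runs on every decoded JSON object (dict), so any dict
    # with an 'id' key gets its leading zeros stripped during parsing.
    if 'id' in d:
        d['id'] = d['id'].lstrip('0')
    return d

raw = '{"features": [{"id": "01001"}, {"id": "000000000001001"}]}'
counties = json.loads(raw, object_hook=strip_zeroes)
print([f['id'] for f in counties['features']])  # ['1001', '1001']
```

Because the hook runs while the JSON is being decoded, no second pass over the structure is needed.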

How can I print specific integer variables in a nested dictionary by using Python?

This is my first question :)
I loop over a nested dictionary to print specific values. I am using the following code.
for i in lizzo_top_tracks['tracks']:
    print('Track Name: ' + i['name'])
It works for string variables, but does not work for other variables. For example, when I use the following code for the date variable:
for i in lizzo_top_tracks['tracks']:
    print('Album Release Date: ' + i['release_date'])
I receive a message like this KeyError: 'release_date'
What should I do?
Here is a sample of my nested dictionary:
{'tracks': [{'album': {'album_type': 'album',
'artists': [{'external_urls': {'spotify': 'https://open.spotify.com/artist/56oDRnqbIiwx4mymNEv7dS'},
'href': 'https://api.spotify.com/v1/artists/56oDRnqbIiwx4mymNEv7dS',
'id': '56oDRnqbIiwx4mymNEv7dS',
'name': 'Lizzo',
'type': 'artist',
'uri': 'spotify:artist:56oDRnqbIiwx4mymNEv7dS'}],
'external_urls': {'spotify': 'https://open.spotify.com/album/74gSdSHe71q7urGWMMn3qB'},
'href': 'https://api.spotify.com/v1/albums/74gSdSHe71q7urGWMMn3qB',
'id': '74gSdSHe71q7urGWMMn3qB',
'images': [{'height': 640,
'width': 640}],
'name': 'Cuz I Love You (Deluxe)',
'release_date': '2019-05-03',
'release_date_precision': 'day',
'total_tracks': 14,
'type': 'album',
'uri': 'spotify:album:74gSdSHe71q7urGWMMn3qB'}]}
The code you posted isn't syntactically correct; running it through a Python interpreter gives a syntax error on the last line. It looks like you lost a curly brace somewhere toward the end. :)
I went through it and fixed up the white space to make the structure easier to see; the way you had it formatted made it hard to see which keys were at which level of nesting, but with consistent indentation it becomes much clearer:
lizzo_top_tracks = {
    'tracks': [{
        'album': {
            'album_type': 'album',
            'artists': [{
                'external_urls': {
                    'spotify': 'https://open.spotify.com/artist/56oDRnqbIiwx4mymNEv7dS'
                },
                'href': 'https://api.spotify.com/v1/artists/56oDRnqbIiwx4mymNEv7dS',
                'id': '56oDRnqbIiwx4mymNEv7dS',
                'name': 'Lizzo',
                'type': 'artist',
                'uri': 'spotify:artist:56oDRnqbIiwx4mymNEv7dS'
            }],
            'external_urls': {
                'spotify': 'https://open.spotify.com/album/74gSdSHe71q7urGWMMn3qB'
            },
            'href': 'https://api.spotify.com/v1/albums/74gSdSHe71q7urGWMMn3qB',
            'id': '74gSdSHe71q7urGWMMn3qB',
            'images': [{'height': 640, 'width': 640}],
            'name': 'Cuz I Love You (Deluxe)',
            'release_date': '2019-05-03',
            'release_date_precision': 'day',
            'total_tracks': 14,
            'type': 'album',
            'uri': 'spotify:album:74gSdSHe71q7urGWMMn3qB'
        }
    }]
}
So the first (and only) value you get for i in lizzo_top_tracks['tracks'] is going to be this dictionary:
i = {
    'album': {
        'album_type': 'album',
        'artists': [{
            'external_urls': {
                'spotify': 'https://open.spotify.com/artist/56oDRnqbIiwx4mymNEv7dS'
            },
            'href': 'https://api.spotify.com/v1/artists/56oDRnqbIiwx4mymNEv7dS',
            'id': '56oDRnqbIiwx4mymNEv7dS',
            'name': 'Lizzo',
            'type': 'artist',
            'uri': 'spotify:artist:56oDRnqbIiwx4mymNEv7dS'
        }],
        'external_urls': {
            'spotify': 'https://open.spotify.com/album/74gSdSHe71q7urGWMMn3qB'
        },
        'href': 'https://api.spotify.com/v1/albums/74gSdSHe71q7urGWMMn3qB',
        'id': '74gSdSHe71q7urGWMMn3qB',
        'images': [{'height': 640, 'width': 640}],
        'name': 'Cuz I Love You (Deluxe)',
        'release_date': '2019-05-03',
        'release_date_precision': 'day',
        'total_tracks': 14,
        'type': 'album',
        'uri': 'spotify:album:74gSdSHe71q7urGWMMn3qB'
    }
}
The only key in this dictionary is 'album', the value of which is another dictionary that contains all the other information. If you want to print, say, the album release date and a list of the artists' names, you'd do:
for track in lizzo_top_tracks['tracks']:
    print('Album Release Date: ' + track['album']['release_date'])
    print('Artists: ' + str([artist['name'] for artist in track['album']['artists']]))
If these are dictionaries that you're building yourself, you might want to remove some of the nesting layers where there's only a single key, since they just make it harder to navigate the structure without giving you any additional information. For example:
lizzo_top_albums = [{
    'album_type': 'album',
    'artists': [{
        'external_urls': {
            'spotify': 'https://open.spotify.com/artist/56oDRnqbIiwx4mymNEv7dS'
        },
        'href': 'https://api.spotify.com/v1/artists/56oDRnqbIiwx4mymNEv7dS',
        'id': '56oDRnqbIiwx4mymNEv7dS',
        'name': 'Lizzo',
        'type': 'artist',
        'uri': 'spotify:artist:56oDRnqbIiwx4mymNEv7dS'
    }],
    'external_urls': {
        'spotify': 'https://open.spotify.com/album/74gSdSHe71q7urGWMMn3qB'
    },
    'href': 'https://api.spotify.com/v1/albums/74gSdSHe71q7urGWMMn3qB',
    'id': '74gSdSHe71q7urGWMMn3qB',
    'images': [{'height': 640, 'width': 640}],
    'name': 'Cuz I Love You (Deluxe)',
    'release_date': '2019-05-03',
    'release_date_precision': 'day',
    'total_tracks': 14,
    'type': 'album',
    'uri': 'spotify:album:74gSdSHe71q7urGWMMn3qB'
}]
This structure allows you to write the query the way you were originally trying to do it:
for album in lizzo_top_albums:
    print('Album Release Date: ' + album['release_date'])
    print('Artists: ' + str([artist['name'] for artist in album['artists']]))
Much simpler, right? :)

Python Eve not following set schema and returning full document when using aggregation pipeline

I have a simple API in which coordinates and a distance are provided, and documents from within that distance are returned. I intend it to return just the id and distance, but the defined schema is being ignored and the whole document is returned. Any ideas?
item = {
    'item_title': 'relate',
    'datasource': {
        'source': 'api',
        'filter': {'_type': 'line'},
        'aggregation': {'pipeline': [
            {'$geoNear': {'near': {'type': 'point', 'coordinates': '$coords'},
                          'distanceField': 'distance',
                          'maxDistance': '$maxDist',
                          'num': 1,
                          'spherical': 'true'}}
        ]}
    },
    'schema': {
        '_id': {'type': 'string'},
        'distance': {'type': 'float'}
    },
}
DOMAIN = {"data": item}
and the postman query is:
http://localhost:8090/data?aggregate={"$maxDist": 500, "$coords": [-1.47, 50.93]}
EDIT:
Following Neil's comment I tried this:
item = {
    'item_title': 'relate',
    'schema': {
        'uri': {'type': 'string'},
        'distance': {'type': 'float'}
    },
    'datasource': {
        'source': 'api',
        'filter': {'_type': 'link'},
        'aggregation': {'pipeline': [
            {'$geoNear': {'near': {'type': 'point', 'coordinates': ['$lng', '$lat']},
                          'distanceField': 'distance',
                          'maxDistance': '$maxDist',
                          'num': 1,
                          'spherical': 'true'}}
        ]}
    }
}
With the following postman request:
http://localhost:8090/data?aggregate={"$maxDist": 500, "$lng": -1.47, "$lat": 50.93}
This is leading to the following error:
geoNear command failed: { ok: 0.0, errmsg: "'near' field must be point", code: 17304, codeName: "Location17304" }
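The error message is consistent with a detail that is easy to miss (this is an assumption on my part, not something confirmed in the thread): MongoDB treats GeoJSON type strings as case-sensitive, so 'point' is not a valid point type ('Point' is), and 'spherical' should be a real boolean rather than the string 'true'. A sketch of what the corrected $geoNear stage might look like:

```python
# Hypothetical corrected pipeline: note 'Point' (capital P) and spherical=True.
pipeline = [{
    '$geoNear': {
        'near': {'type': 'Point', 'coordinates': ['$lng', '$lat']},
        'distanceField': 'distance',
        'maxDistance': '$maxDist',
        'num': 1,
        'spherical': True,
    }
}]

print(pipeline[0]['$geoNear']['near']['type'])  # Point
```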
