I am running the following:
import geopy
geolocator = geopy.geocoders.OpenMapQuest(api_key='my_key_here')
location1 = geolocator.geocode('Madrid')
where my_key_here is my consumer key for MapQuest, and I get the following error:
GeocoderInsufficientPrivileges: HTTP Error 403: Forbidden
Not sure what I am doing wrong.
Thanks!
I've also tried the same, with the same result. After checking the library, I found that the error refers to the line where the request is built, and it seems the API key is not being transmitted. If you pass no key in the init statement, api_key defaults to '', so I tried changing line 66 in my local copy of the file https://github.com/geopy/geopy/blob/master/geopy/geocoders/openmapquest.py to hard-code my key.
Still no success! The key itself works, I've tested it with calling the URL that is also called in the Library:
http://open.mapquestapi.com/nominatim/v1/search.php?key="MY_KEY"&format=json&json_callback=renderBasicSearchNarrative&q=westminster+abbey
no idea why this isn't working…
Cheers, kg
I made slight progress with fixing this one. I was able to get the query written correctly, but it's the JSON parsing that has me stumped. I know the URL is being built correctly (I checked it in the browser and it returned a JSON object). Maybe someone knows how to parse the returned JSON object to get it to finally work.
Anyway, I had to go into the openmapquest.py source code and, starting from line 66, I made the following modifications:
        self.api_key = api_key
        self.api = "http://www.mapquestapi.com/geocoding/v1/address?"

    def geocode(self, query, exactly_one=True, timeout=None):  # pylint: disable=W0221
        """
        Geocode a location query.

        :param string query: The address or query you wish to geocode.

        :param bool exactly_one: Return one result or a list of results, if
            available.

        :param int timeout: Time, in seconds, to wait for the geocoding service
            to respond before raising a :class:`geopy.exc.GeocoderTimedOut`
            exception. Set this only if you wish to override, on this call
            only, the value set during the geocoder's initialization.

        .. versionadded:: 0.97
        """
        params = {
            'key': self.api_key,
            'location': self.format_string % query
        }
        if exactly_one:
            params['maxResults'] = 1
        url = "&".join((self.api, urlencode(params)))
        print url  # Print the URL just to make sure it's produced correctly
Now the remaining task is to get the _parse_json function working.
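In case it helps anyone, here's a rough sketch of what a replacement _parse_json could look like for the MapQuest geocoding v1 response. The 'results'/'locations'/'latLng' key names are from MapQuest's documented response format as I remember it, so verify them against the JSON you see in the browser; the function follows geopy's convention of returning (place, (latitude, longitude)) tuples:

def _parse_json(self, page, exactly_one=True):
    """Parse a display name and coordinates out of a MapQuest
    geocoding v1 response that has already been decoded to a dict."""
    places = []
    for result in page.get('results', []):
        for loc in result.get('locations', []):
            # Build a readable place name from the address fields
            name = ', '.join(part for part in (
                loc.get('street'), loc.get('adminArea5'),
                loc.get('adminArea3'), loc.get('postalCode')) if part)
            latlng = loc['latLng']
            places.append((name, (latlng['lat'], latlng['lng'])))
    if exactly_one:
        return places[0] if places else None
    return places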
I've recently started working with the Facebook Marketing API, using the facebook_business SDK for Python (running v3.9 on Ubuntu 20.04). I think I've mostly wrapped my head around how it works; however, I'm still somewhat at a loss as to how to handle the arbitrary way in which the API is rate-limited.
Specifically, what I'm attempting to do is to retrieve all Ad Sets from all the campaigns that have ever run on my ad account, regardless of whether their effective_status is ACTIVE, PAUSED, DELETED or ARCHIVED.
Hence, I pulled all the campaigns for my ad account. These are stored in a dict called output, where the key indicates the effective_status, like so:
{'ACTIVE': ['******************',
            '******************',
            '******************'],
 'PAUSED': ['******************',
            '******************',
            '******************']}
Then, I'm trying to pull the Ad Set ids, like so:
import pandas as pd
import json
import re
import time
from random import uniform
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount  # account-level info
from facebook_business.adobjects.campaign import Campaign  # campaign-level info
from facebook_business.adobjects.adset import AdSet  # ad-set level info
from facebook_business.adobjects.ad import Ad  # ad-level info

# auth init
app_id = open(APP_ID_PATH, 'r').read().splitlines()[0]
app_secret = open(APP_SECRET_PATH, 'r').read().splitlines()[0]
token = open(APP_ACCESS_TOKEN, 'r').read().splitlines()[0]

# init the connection
FacebookAdsApi.init(app_id, app_secret, token)

campaign_types = list(output.keys())
ad_sets = {}
for status in campaign_types:
    ad_sets_for_status = []
    for campaign_id in output[status]:
        # sleep and wait for a random time
        sleepy_time = uniform(1, 3)
        time.sleep(sleepy_time)
        # pull the ad sets for this particular campaign
        campaign_ad_sets = Campaign(campaign_id).get_ad_sets()
        for entry in campaign_ad_sets:
            ad_sets_for_status.append(entry['id'])
    ad_sets[status] = ad_sets_for_status
Now, this crashes at different times whenever I run it, with the following error:
FacebookRequestError:
Message: Call was not successful
Method: GET
Path: https://graph.facebook.com/v11.0/23846914220310083/adsets
Params: {'summary': 'true'}
Status: 400
Response:
{
"error": {
"message": "(#17) User request limit reached",
"type": "OAuthException",
"is_transient": true,
"code": 17,
"error_subcode": 2446079,
"fbtrace_id": "***************"
}
}
I can't reproduce the time at which it crashes; however, it certainly doesn't take ~600 calls (see here: https://stackoverflow.com/a/29690316/5080858), and as you can see, I'm sleeping ahead of every API call. You might suggest that I just call the get_ad_sets method on the AdAccount endpoint; however, that pulls fewer ad sets than the above code does, even before it crashes. For my use case, it's important to pull ads that are long over as well as ads that are ongoing, so I need to get as much data as possible.
I'm kind of annoyed with this -- seeing as we are paying for these ads to run, you'd think FB would make it as easy as possible to retrieve info on them via API, and not introduce API rate limits similar to those for valuable data one doesn't necessarily own.
Anyway, I'd appreciate any kind of advice or insights - perhaps there's also a much better way of doing this that I haven't considered.
Many thanks in advance!
The error with 'code': 17 means that you have reached the call limit; in order to get more nodes, you have to wait.
First, I would handle the error this way:
from facebook_business.exceptions import FacebookRequestError

...

for status in campaign_types:
    ad_sets_for_status = []
    for campaign_id in output[status]:
        # keep trying until the request succeeds
        while True:
            try:
                campaign_ad_sets = Campaign(campaign_id).get_ad_sets()
                break
            except FacebookRequestError as error:
                if error.api_error_code() in [17, 80000]:
                    time.sleep(sleepy_time)  # sleep for a period of time, then retry
                else:
                    raise  # not a rate-limit error, so don't keep retrying
        for entry in campaign_ad_sets:
            ad_sets_for_status.append(entry['id'])
    ad_sets[status] = ad_sets_for_status
Moreover, I'd suggest fetching the list of nodes from the account level (by using the 'level': node param in params) and using batch calls: I can assure you that this will help a lot and will decrease the program's run time.
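As a sketch of the account-level idea (untested against a real account; AdAccount.get_ad_sets and the AdSet fields exist in the SDK, but double-check the effective_status filter param against your API version, and note the account id is a placeholder):

from facebook_business.adobjects.adaccount import AdAccount
from facebook_business.adobjects.adset import AdSet

account = AdAccount('act_<AD_ACCOUNT_ID>')  # placeholder: your ad account id

# One paginated call per account instead of one call per campaign
ad_sets = account.get_ad_sets(
    fields=[AdSet.Field.id, AdSet.Field.campaign_id, AdSet.Field.effective_status],
    params={
        'effective_status': ['ACTIVE', 'PAUSED', 'DELETED', 'ARCHIVED'],
        'limit': 100,
    },
)
for ad_set in ad_sets:  # the SDK cursor pages through results transparently
    print(ad_set[AdSet.Field.id], ad_set[AdSet.Field.effective_status])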
I hope I was helpful.
I am trying to access an API from this website: https://www.eia.gov/opendata/qb.php?category=717234
I am able to call the API, but I am getting only headers. Not sure if I am doing it correctly or if any additions are needed.
Code:
import json
import urllib.request

locu_api = 'WebAPI'

def locu_search(query):
    api_key = locu_api
    url = 'https://api.eia.gov/category?api_key=' + api_key
    locality = query.replace(' ', '%20')  # note: locality is never actually used below
    response = urllib.request.urlopen(url).read()
    json_obj = str(response, 'utf-8')
    data = json.loads(json_obj)
    return data  # so the parsed response can be inspected by the caller
When I try to print the results to see what's in data:
data
I am getting only the headers in the JSON output. Can anyone help me figure out how to extract the data instead of the headers?
Avi!
Look, the data you posted seems to be an application/json response. I tried to reorganize your snippet a little bit so you could reuse it for other purposes later.
import requests

API_KEY = "insert_it_here"

def get_categories_data(api_key, category_id):
    """
    Makes a request to the gov API and returns its JSON response
    as a python dict.
    """
    host = "https://api.eia.gov"  # no trailing slash, to avoid a double slash in the URL
    endpoint = "category"
    url = f"{host}/{endpoint}"
    qry_string_params = {"api_key": api_key, "category_id": category_id}
    response = requests.post(url, params=qry_string_params)
    return response.json()

print(get_categories_data(api_key=API_KEY, category_id="717234"))
As far as I can tell, the response contains some categories and their names. If that's not what you were expecting, maybe there's another endpoint that you should look for. I'm sure this snippet can help you if that's the case.
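For example, to pull those category ids and names out of the parsed response (a sketch: the 'category', 'childcategories' and 'childseries' key names are what I recall from the v1 EIA API, so verify them against your actual payload):

data = get_categories_data(api_key=API_KEY, category_id="717234")
category = data.get("category", {})

# Sub-categories nested under this category
for child in category.get("childcategories", []):
    print(child["category_id"], child["name"])

# Data series attached directly to this category
for series in category.get("childseries", []):
    print(series["series_id"], series["name"])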
Side note: isn't your API key supposed to be private? Not sure if you should share that.
Update:
Thanks to Brad Solomon, I've changed the snippet to pass query string arguments to the requests.post function by using the params parameter which will take care of the URL encoding, if necessary.
You haven't presented all of the data. But what I see here is first a dict that associates category_id (a number) with a variable name. For example category_id 717252 is associated with variable name 'Import quantity'. Next I see a dict that associates category_id with a description, but you haven't presented the whole of that dict so 717252 does not appear. And after that I would expect to see a third dict, here entirely missing, associating a category_id with a value, something like {'category_id': 717252, 'value': 123.456}.
I think you are just unaccustomed to the way some APIs aggressively decompose their data into key/value pairs. Look more closely at the data. Can't help any further without being able to see the data for myself.
I have a DataFrame common_ips containing IPs as shown below.
I need to achieve two basic tasks:
Identify private and public IPs.
Check organisation for public IPs.
Here is what I am doing:
import json
import urllib.request
import re

baseurl = 'http://ipinfo.io/'  # no HTTPS supported (at least: not without a plan)

def isIPpublic(ipaddress):
    return not isIPprivate(ipaddress)

def isIPprivate(ipaddress):
    if ipaddress.startswith("::ffff:"):
        ipaddress = ipaddress.replace("::ffff:", "")
    # IPv4 regexp from https://stackoverflow.com/questions/30674845/
    if re.search(r"^(?:10|127|172\.(?:1[6-9]|2[0-9]|3[01])|192\.168)\..*", ipaddress):
        # A match, so a local or RFC1918 IPv4 address
        return True
    if ipaddress == "::1":
        # IPv6 localhost
        return True
    return False

def getipInfo(ipaddress):
    url = '%s%s/json' % (baseurl, ipaddress)
    try:
        urlresult = urllib.request.urlopen(url)
        jsonresult = urlresult.read()  # get the JSON
        parsedjson = json.loads(jsonresult)  # put parsed JSON into a dictionary
        return parsedjson
    except:
        return None

def checkIP(ipaddress):
    if isIPpublic(ipaddress):
        if bool(getipInfo(ipaddress)):
            if 'bogon' in getipInfo(ipaddress).keys():
                return 'Private IP'
            elif bool(getipInfo(ipaddress).get('org')):
                return getipInfo(ipaddress)['org']
            else:
                return 'No organization data'
        else:
            return 'No data available'
    else:
        return 'Private IP'
And applying it to my common_ips DataFrame with
common_ips['Info'] = common_ips.IP.apply(checkIP)
But it's taking longer than I expected. And for some IPs, it's giving incorrect Info.
For instance, for one IP the Info should have been AS19902 Department of Administrative Services, as I cross-checked by querying ipinfo.io for that address directly (screenshots omitted).
What am I missing here? And how can I achieve these tasks in a more Pythonic way?
A blanket except: is basically always a bug. You are returning None instead of handling any anomalous or error response from the server, and of course the rest of your code has no way to recover.
As a first debugging step, simply take out the try/except handling. Maybe then you can find a way to put back a somewhat more detailed error handler for some cases which you know how to recover from.
def getipInfo(ipaddress):
    url = '%s%s/json' % (baseurl, ipaddress)
    urlresult = urllib.request.urlopen(url)
    jsonresult = urlresult.read()  # get the JSON
    parsedjson = json.loads(jsonresult)  # put parsed JSON into a dictionary
    return parsedjson
Perhaps the calling code in checkIP should have a try/except instead, and e.g. retry after sleeping for a bit if the server indicates that you are going too fast.
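A minimal sketch of that idea, assuming ipinfo.io signals throttling with HTTP 429 (verify against the responses you actually receive):

import json
import time
import urllib.error
import urllib.request

baseurl = 'http://ipinfo.io/'

def getipInfo(ipaddress, retries=3):
    """Fetch ipinfo.io data, backing off and retrying when rate-limited."""
    url = '%s%s/json' % (baseurl, ipaddress)
    for attempt in range(retries):
        try:
            return json.loads(urllib.request.urlopen(url).read())
        except urllib.error.HTTPError as e:
            # assumption: 429 means "too many requests"; anything else is a real error
            if e.code == 429 and attempt < retries - 1:
                time.sleep(2 ** attempt)  # back off: 1s, 2s, ...
            else:
                raise  # let real errors surface instead of returning None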
(In the absence of an authorization token, it looks like you are using the free version of this service, which is probably not in any way guaranteed anyway. Also maybe look at using their recommended library -- I haven't looked at it in more detail, but I would imagine it at the very least knows better how to behave in the case of a server-side error. It's almost certainly also more Pythonic, at least in the sense that you should not reinvent things which already exist.)
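If you do go the library route, usage is roughly the following, going by the ipinfo package's README (the getHandler/getDetails names are from memory, and the token is a placeholder, so check before relying on this):

import ipinfo  # pip install ipinfo

handler = ipinfo.getHandler('YOUR_TOKEN_HERE')  # placeholder token; the free tier has tight limits
details = handler.getDetails('8.8.8.8')
print(details.org)  # the AS number and organization name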
I am learning how web apps work, and after successfully creating a connection between the front and back end, I managed to perform a GET request with axios:
Route in my Flask app:
@app.route('/api/random')
def random_number():
    k = kokos()
    print(k)
    response = {'randomNumber': k}
    return jsonify(response)
My kokos() function:
def kokos():
    return (890)
The function that I call to get data from the backend:
getRandomFromBackend () {
  const path = `http://localhost:5000/api/random`
  axios.get(path)
    .then(response => { this.randomNumber = response.data.randomNumber })
    .catch(error => {
      console.log(error)
    })
}
Now suppose I have an input field in my app with a value that I want to use in the function kokos() to affect the result and what is displayed in my app. Can someone explain to me how to do that?
Is this what POST requests are for, and do I have to post first and then get? Or can I still use GET and somehow pass "arguments"? Is this even what GET and POST are for, or am I making it too complicated for myself?
Is this the proper way to do this kind of thing? I just have a lot of code in Python already written and simply want to exchange data between the server and the client.
Thank you, Jakub
You can add a second argument:
axios.get(path, {
  params: {
    id: 122
  }
})
.then ...
You can pass id (or anything else) like this; it will be available in the GET params on the Python side, just as if it were passed in the URL.
On the Python side [Flask] (http://flask.pocoo.org/docs/1.0/quickstart/#accessing-request-data):
To access parameters submitted in the URL (?key=value), you can use the args attribute:
from flask import request

def random_number():
    id = request.args.get('id', '')
    k = kokos(id)
id will be passed to the kokos function; if no id is provided, it defaults to the blank string ''.
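Putting it all together, the route would look roughly like this (a sketch; kokos is your existing function, now taking an argument):

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/api/random')
def random_number():
    id = request.args.get('id', '')  # the value axios sent in params
    k = kokos(id)                    # your function, now parameterized
    return jsonify({'randomNumber': k})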
You can read the axios docs to make more complex requests:
https://github.com/axios/axios
If you have any doubts, please comment.
I have server-side code that calls the Google geocoding API, like this:
http://maps.google.com/maps/geo?q=40.714224,-73.961452&output=json&sensor=false&key=API_KEY
where API_KEY is my API key. I get a JSON reply, as expected, but the response is always 602 (Unknown Address). Is my URL wrong? (I've also tried the URL in the Google docs, but that returns a status: 'REQUEST_DENIED'.)
What else could be wrong?
Update:
Well, it seems to actually be a mistake in my implementation, not the URL. This was how I did it:
api_params = {
    'q': '40.714224,-73.961452',
    'sensor': 'false',
    'key': KEY,
    'output': 'json'
}

# make the api call
http_response = urllib.urlopen('http://maps.google.com/maps/geo',
                               urllib.urlencode(api_params))
r = json.load(http_response)
but changing it to:
api_params = {
    #'q': str(lat) + ',' + str(lng),
    'q': '40.714224,-73.961452',
    'sensor': 'false',
    'key': KEY,
    'output': 'json'
}

# make the api call
http_response = urllib.urlopen('http://maps.google.com/maps/geo?q=' + api_params['q']
                               + '&output=json&sensor=false&key=' + api_params['key'])
r = json.load(http_response)
print r
fixes the problem. So my new question is, what's wrong with the first one?
The first one executes a POST request, the second a GET request.
You may also want to use the urllib.urlencode function instead of concatenating the query string by hand.
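In other words, something like this (a sketch reusing the same api_params dict from your snippet; urllib.urlencode builds the query string for you):

# build the query string and issue a GET request
query_string = urllib.urlencode(api_params)
http_response = urllib.urlopen('http://maps.google.com/maps/geo?' + query_string)
r = json.load(http_response)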
But the easiest way is to use geopy.
Try using an HTTP watcher to make sure that this is the actual URL that is being sent within your application. There's a chance that it isn't being encoded correctly or is being incorrectly assembled. Since you aren't getting "request denied" and we were able to get a good response when we viewed it directly, that seems like the best place to start. Hope that helps!