Fitbit API HTTPS error - Python

I'm trying to get my heart rate and sleep data through the Fitbit API, using this library:
https://github.com/orcasgit/python-fitbit
to connect to the server and get the access and refresh tokens (I use gather_keys_oauth2 to get the tokens).
When I connect over HTTP I do manage to get the sleep data, but when I try to get the heart rate like this:
client.time_series("https://api.fitbit.com/1/user/-/activities/heart/date/today/1d.json", period="1d")
I get this error:
HTTPBadRequest: this request must use the HTTPS protocol
And for some reason I can't connect over HTTPS: when I try, the browser shows ERR_SSL_PROTOCOL_ERROR before the Fitbit authorization page even appears.
I checked every browser setting that might cause the failure, but they are all fine and the error still pops up.
I've tried changing the callback URL and followed other Fitbit OAuth2 connection guides, but I can only ever connect over HTTP, not HTTPS.
Does anyone know how to solve this?

Your code should be client.time_series('activities/heart', period='1d') to get heart rate.
The first parameter, resource, is not a full URL; it expects one of: activities, body, foods, heart, sleep.
Here is a link to the relevant source code from python-fitbit:
http://python-fitbit.readthedocs.io/en/latest/_modules/fitbit/api.html#Fitbit.time_series
Added:
If you want the full heart rate data per minute (the ["activities-heart-intraday"] dataset), try client.intraday_time_series('activities/heart'). It returns the data at one-minute/one-second detail.
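To make the distinction concrete, here is a stdlib-only sketch of how a short resource name like 'activities/heart' expands into the final request URL. The template below is inferred from the URL in the question, not copied from the library's source, so treat the exact format as an assumption; it shows why passing a full URL as the resource produces a malformed request.

```python
# Sketch: expanding a short resource fragment into the request URL.
# The template is inferred from the question's URL, not the library.
API_ROOT = "https://api.fitbit.com/1"

def time_series_url(resource, user_id="-", base_date="today", period="1d"):
    """Build the time-series URL from a short resource fragment."""
    return f"{API_ROOT}/user/{user_id}/{resource}/date/{base_date}/{period}.json"

print(time_series_url("activities/heart"))
# https://api.fitbit.com/1/user/-/activities/heart/date/today/1d.json
```

Passing the full URL as `resource` would splice one complete URL into the middle of another, which the server rejects.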

OK, I've worked out the HTTPS issue as it relates to my case. It was because I sent a request to:
https://api.fitbit.com//1/user/-/activities/recent.json
I removed the extra forward slash after .com and it worked:
https://api.fitbit.com/1/user/-/activities/recent.json
However, this is not quite the issue you had, although it returned the same message for me: this request must use the HTTPS protocol.
That suggests any unhandled error caused by a malformed request to Fitbit returns this same message, rather than one that gives you a little more of a clue as to what actually happened.


Control API: Service unavailable (503)

Good morning,
I want to query households (my first query, and my first experience with the Sonos API in general) and have authenticated successfully. I got an access token and query the Control API like this:
headers = {"Content-Type": "application/json",
           "Authorization": "Bearer " + token["access_token"]}
resp = re.get('http://api.ws.sonos.com/control/api/v1/househoulds', headers=headers)
It returns a response with error code "503: Service Unavailable":
Service Unavailable
Service Unavailable - Zero size object
The server is temporarily unable to service your request. Please try again
later.
Reference XXXXX
(I cut out the reference because I'm not sure whether it contains credentials.) I remember that when I intentionally changed my access token to a wrong one yesterday, I got an error back saying I was not authorized. But now, even when I change it to a wrong one, I still just get this same page back (503: Service Unavailable).
Does anyone have the same problem? Might it be some security mechanism because I authorized many times in a short period, or is the Control API just currently down? I tried yesterday and today and don't see a blog post announcing any downtime.
I see two issues with the code snippet you provided:
Issue 1: Your API URL has a typo. You used "househoulds" instead of "households".
Issue 2: Your URL needs to use https://, not http://.
If you fix those two issues and are indeed using a valid access token, your request should work.
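A corrected version of the snippet might look like this. It is only a sketch: the token value comes from your own OAuth flow, and `requests` is imported lazily inside the request helper so the URL-building part stands on its own.

```python
from urllib.parse import urlunsplit

def control_url(path):
    """Control API URL with both fixes applied:
    the https scheme and correctly spelled path segments."""
    return urlunsplit(("https", "api.ws.sonos.com",
                       "/control/api/v1/" + path, "", ""))

def get_households(token):
    """Query /households with a bearer token from the OAuth flow."""
    import requests  # the question aliases this library as `re`
    headers = {"Content-Type": "application/json",
               "Authorization": "Bearer " + token["access_token"]}
    resp = requests.get(control_url("households"), headers=headers)
    resp.raise_for_status()
    return resp.json()
```

With a valid token, `get_households({"access_token": "..."})` should return the households JSON instead of the 503 page.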

Soundcloud API is returning 403 on some tracks

SoundCloud's API is returning 403 on some tracks for me. I have tried the raw HTTP endpoints and also the SoundCloud API wrapper for Python; both have the issue.
https://api.soundcloud.com/tracks/251164884.json?client_id=CLIENT_ID
The one above returns a 403 error while the one below works, using the same CLIENT_ID, obviously:
https://api.soundcloud.com/tracks/197355235.json?client_id=CLIENT_ID
Using the library wrapper I get: requests.exceptions.HTTPError: 403 Client Error: Forbidden
import soundcloud
client = soundcloud.Client(client_id=CLIENT_ID)
track = client.get('/resolve', url='https://soundcloud.com/mtarecords/my-nu-leng-flava-d-soul-shake')
https://soundcloud.com/calyxteebee/nothing-left
Another track that also doesn't resolve. Not all tracks have this issue; most work the way they always have.
If you go to Share -> Embed on SoundCloud, the track_id is in there, so I know I am using the correct track_id.
Viewing the HTTP endpoints in a browser, I get this error:
Failed to load resource: the server responded with a status of 401 (Unauthorized) - https://api.soundcloud.com/favicon.ico
Anyone else run into this issue before?
Using your two examples I get valid results for both.
Example 1:
https://api.soundcloud.com/resolve?url=https://soundcloud.com/calyxteebee/nothing-left&client_id=CLIENT_ID
returns
https://api.soundcloud.com/tracks/251164884?client_id=CLIENT_ID
Example 2:
https://api.soundcloud.com/resolve?url=https://soundcloud.com/mtarecords/my-nu-leng-flava-d-soul-shake&client_id=CLIENT_ID
returns
https://api.soundcloud.com/tracks/249638630?client_id=CLIENT_ID
This URL works perfectly for me; try it:
https://api.soundcloud.com/tracks/TRACK_ID/stream?client_id=CLIENT_ID
I have been investigating this issue for some time now, and I discovered something that at least solves my situation; I don't know if it will solve yours.
The revelation:
If you do a HEAD request with curl (the -I option), it seems to always return a response in the 200/300 range.
Why it works: I am streaming SoundCloud tracks with URLs like https://api.soundcloud.com/tracks/TRACK_ID/stream?client_id=CLIENT_ID in an iOS app using FreeStreamer. The stream was failing on exactly those tracks for which curl -v returned 403 for the tracks URL (it returns 401 for the stream URL). So, to solve my situation, I perform a HEAD request, which gives 302 Found, extract the mp3 URL from the Location header, and use that to stream instead of the original URL.
I consider this a bug in the library (since it should be able to handle any 302) and I created an issue for it.
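In Python, the same workaround might look like the sketch below. `resolve_mp3_url` is a hypothetical helper name, and the 302-with-Location behavior is only what the curl experiment above suggests, not documented API behavior.

```python
def stream_url(track_id):
    """The /stream endpoint for a track (client_id added separately)."""
    return f"https://api.soundcloud.com/tracks/{track_id}/stream"

def resolve_mp3_url(track_id, client_id):
    """HEAD the stream endpoint without following redirects and pull
    the actual mp3 URL out of the Location header, mirroring the
    curl -I workaround described above."""
    import requests  # deferred so stream_url() works without it
    resp = requests.head(stream_url(track_id),
                         params={"client_id": client_id},
                         allow_redirects=False)
    if resp.status_code == 302:
        return resp.headers["Location"]
    resp.raise_for_status()
```

The returned URL can then be handed to the streaming library directly, bypassing the redirect it fails to handle.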

Google crawl 503 service unavailable

I have a very strange problem when I crawl the Google search engine with wget, curl, or Python on my servers. Google redirects me to an address starting with [ipv4|ipv6].google.fr/sorry/IndexRedirect... and finally sends a 503 error, Service Unavailable...
Sometimes the crawl works correctly and sometimes it doesn't during the day, and I have tried almost everything possible: forcing IPv4/IPv6 instead of the hostname, referer, user agent, VPN, .com/.fr, proxies, and Tor...
I guess this is an error from Google's servers... any ideas? Thanks!
wget "http://google.fr/search?q=test"
--2015-06-03 10:19:52-- http://google.fr/search?q=test
Resolving google.fr (google.fr)... 2a00:1450:400c:c05::5e, 173.194.67.94
Connecting to google.fr (google.fr)|2a00:1450:400c:c05::5e|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://ipv6.google.com/sorry/IndexRedirect?continue=http://google.fr/search%3Fq%3Dtest&q=CGMSECABQdAAUQABAAAAAAAAH1QYqPG6qwUiGQDxp4NLQuHgP_i-oiUu0ZShPumAZRF3u_0 [following]
--2015-06-03 10:19:53-- http://ipv6.google.com/sorry/IndexRedirect?continue=http://google.fr/search%3Fq%3Dtest&q=CGMSECABQdAAUQABAAAAAAAAH1QYqPG6qwUiGQDxp4NLQuHgP_i-oiUu0ZShPumAZRF3u_0
Resolving ipv6.google.com (ipv6.google.com)... 2a00:1450:400c:c05::64
Connecting to ipv6.google.com (ipv6.google.com)|2a00:1450:400c:c05::64|:80... connected.
HTTP request sent, awaiting response... 503 Service Unavailable
2015-06-03 10:19:53 ERROR 503: Service Unavailable.
Google has triggers to sniff out bots and other abuse of its Terms of Service, so it sets a limit (or a "throttle") on the number of calls the same IP address can make over a certain period of time. I believe it's something like 10 calls per minute. Case in point: if you paste your URL into a browser when it fails with a 503 error, you'll get a CAPTCHA challenge from Google to prove you are not a bot.
I am using the pattern.web module to do essentially the same thing you are doing (for harmless research purposes, of course!), and the documentation for that library lists the throttling limits for the most popular APIs (Google, Bing, Twitter, Facebook...).
Try sending your requests every 15+ seconds or so, to avoid tripping the throttle limit.
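One way to enforce that spacing is a small throttle object, sketched below. The 15-second interval is only the rule of thumb above, not a documented Google quota; the clock and sleep functions are injectable so the pacing logic can be tested without waiting.

```python
import time

class Throttle:
    """Block so that successive calls are at least `interval` seconds
    apart. Call wait() immediately before each search request."""

    def __init__(self, interval=15.0, clock=time.monotonic, sleep=time.sleep):
        # clock/sleep are injectable for testing with a fake clock.
        self.interval = interval
        self.clock = clock
        self.sleep = sleep
        self._last = None

    def wait(self):
        if self._last is not None:
            remaining = self.interval - (self.clock() - self._last)
            if remaining > 0:
                self.sleep(remaining)
        self._last = self.clock()
```

Usage: create one `Throttle()` and call `throttle.wait()` before each wget/curl/urllib request in your crawl loop.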

Odd redirect location causes proxy error with urllib2

I am using urllib2 to do an http post request using Python 2.7.3. My request is returning an HTTPError exception (HTTP Error 502: Proxy Error).
Looking at the messages traffic with Charles, I see the following is happening:
I send the HTTP request (POST /index.asp?action=login HTTP/1.1) using urllib2
The remote server replies with status 303 and a location header of ../index.asp?action=news
urllib2 follows the redirect by sending a GET request: (GET /../index.asp?action=news HTTP/1.1)
The remote server replies with status 502 (Proxy error)
The 502 reply includes this in the response body: "DNS lookup failure for: 10.0.0.30:80index.asp" (Notice the malformed URL)
So I take this to mean that a proxy server on the remote server's network sees the "/../index.asp" URL in the request and misinterprets it, sending my request on with a bad URL.
When I make the same request with my browser (Chrome), the retry is sent to GET /index.asp?action=news. So Chrome takes off the leading "/.." from the URL, and the remote server replies with a valid response.
Is this a urllib2 bug? Is there something I can do so the retry ignores the "/.." in the URL? Or is there some other way to solve this problem? Thinking it might be a urllib2 bug, I swapped urllib2 out for requests, but requests produced the same result (though requests is actually built on urllib3, not urllib2).
Thanks for any help.
The Location being sent with that 303 is wrong in multiple ways.
First, if you read RFC2616 (HTTP/1.1 Header Field Definitions) 14.30 Location, the Location must be an absoluteURI, not a relative one. And section 10.3.3 makes it clear that this is the relevant definition.
Second, even if a relative URI were allowed, RFC 1808, Relative Uniform Resource Locators, 4. Resolving Relative URLs, step 6, only specifies special handling for .. in the pattern <segment>/../. That means that a relative URL shouldn't start with ... So, even if the base URL is http://example.com/foo/bar/ and the relative URL is ../baz/, the resolved URL is not http://example.com/foo/baz/, but http://example.com/foo/bar/../baz. (Of course most servers will treat these the same way, but that's up to each server.)
Finally, even if you did combine the relative and base URLs before resolving .., an absolute URI with a path starting with .. is invalid.
So, the bug is in the server's configuration.
Now, it just so happens that many user-agents will work around this bug. In particular, they turn /../foo into /foo to block users (or arbitrary JS running on their behalf without their knowledge) from trying to do "escape from webroot" attacks.
But that doesn't mean that urllib2 should do so, or that it's buggy for not doing so. Of course urllib2 should detect the error earlier so it can tell you "invalid path" or something, instead of running together an illegal absolute URI that's going to confuse the server into sending you back nonsense errors. But it is right to fail.
It's all well and good to say that the server configuration is wrong, but unless you're the one in charge of the server, you'll probably face an uphill battle trying to convince them that their site is broken and needs to be fixed when it works with every web browser they care about. Which means you may need to write your own workaround to deal with their site.
The way to do that with urllib2 is to supply your own HTTPRedirectHandler with an implementation of redirect_request method that recognizes this case and returns a different Request than the default code would (in particular, http://example.com/index.asp?action=news instead of http://example.com/../index.asp?action=news).
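A sketch of that workaround, shown with Python 3's urllib.request (Python 2's urllib2 exposes the same redirect_request() hook with the same signature). The path-cleaning rule mimics what Chrome does above; the example URL is the question's hypothetical one.

```python
import urllib.request
from urllib.parse import urlsplit, urlunsplit

def strip_leading_dotdot(url):
    """Drop leading '/..' segments from the path, the way browsers
    do before following the redirect."""
    parts = urlsplit(url)
    path = parts.path
    while path.startswith("/../"):
        path = path[3:]
    return urlunsplit(parts._replace(path=path))

class DotDotRedirectHandler(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        # Clean the broken Location before the default handler
        # builds the follow-up Request.
        return super().redirect_request(
            req, fp, code, msg, headers, strip_leading_dotdot(newurl))

opener = urllib.request.build_opener(DotDotRedirectHandler())
# opener.open(...) now retries GET /index.asp?action=news instead of
# GET /../index.asp?action=news
```

Every other part of the redirect handling stays default; only the malformed path is rewritten.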

Python urllib.urlopen() call doesn't work with a URL that a browser accepts

If I point Firefox at http://bitbucket.org/tortoisehg/stable/wiki/Home/ReleaseNotes, I get a page of HTML. But if I try this in Python:
import urllib
site = 'http://bitbucket.org/tortoisehg/stable/wiki/Home/ReleaseNotes'
req = urllib.urlopen(site)
text = req.read()
I get the following:
500 Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request.
What am I doing wrong?
You are not doing anything wrong; bitbucket does some user-agent detection (to detect Mercurial clients, for example). Just changing the user agent fixes it (as long as it doesn't contain urllib as a substring).
You should file an issue about this: http://bitbucket.org/jespern/bitbucket/issues/new/
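For illustration, here's what that fix looks like with Python 3's urllib.request (the question uses Python 2's urllib, but the idea is identical). "Mozilla/5.0" is just an arbitrary replacement string that doesn't contain "urllib"; the network call itself is left commented out.

```python
import urllib.request

# The default User-Agent advertises urllib, which appears to be
# what bitbucket's detection keys on:
default_ua = dict(urllib.request.build_opener().addheaders)['User-agent']
print(default_ua)  # e.g. "Python-urllib/3.12"

# Override it with any string that doesn't contain "urllib":
site = 'http://bitbucket.org/tortoisehg/stable/wiki/Home/ReleaseNotes'
req = urllib.request.Request(site, headers={'User-Agent': 'Mozilla/5.0'})
# text = urllib.request.urlopen(req).read()  # network call
```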
You're doing nothing wrong, on the surface, and as the error page says you should contact the site's administrators because they're the ones with the server logs which may explain what's happening. Fortunately, bitbucket's site admins are a friendly bunch!
No doubt there is some header or combination of headers that browsers set one way and urllib sets another, and a bug on the server gets tickled in the latter case. You may want to see exactly what headers are being sent, e.g. with Firebug in Firefox, and reproduce those until you isolate exactly which one triggers the server bug; most likely it's the user agent or some Accept-ish header.
I don't think you're doing anything wrong -- it looks like this server was just down? Your script worked fine for me ('text' contained the same data as that displayed in the browser).
