I am running a server with BaseHTTPServer in Python.
I receive a request from a client which is based on HTTP/1.1.
However, when I answer the client with my response, the client refuses to accept it.
On further analysis I saw that the HTTP version I am sending is HTTP/1.0.
However, I don't know how it is set.
The error on the client side is:
Original message: not well-formed (invalid token): line 2, column 4
Response
HTTP/1.0 200 OK
Server: BaseHTTP/0.3 Python/2.7.5
Date: Wed, 30 Jul 2014 15:11:42 GMT
Content-type: application/soap+xml; charset=utf-8
Content-length: 823
I am setting the headers in the following way:
self.send_response(200)
self.send_header("Content-type", "application/soap+xml; charset=utf-8")
self.send_header("Content-length", content_length)
self.end_headers()
Set the protocol_version attribute on your handler class:
handler.protocol_version = 'HTTP/1.1'
This requires that you set a Content-Length header, which you already do.
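For example, a minimal sketch (Python 2, to match the BaseHTTP/0.3 Python/2.7.5 server in your response; the handler name, port, and response body are placeholders):

from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

class SoapHandler(BaseHTTPRequestHandler):
    # BaseHTTPRequestHandler defaults to HTTP/1.0; override it here.
    protocol_version = 'HTTP/1.1'

    def do_POST(self):
        body = '<soap:Envelope/>'  # placeholder payload
        self.send_response(200)
        self.send_header("Content-type", "application/soap+xml; charset=utf-8")
        # Mandatory under HTTP/1.1, where the connection stays open by default.
        self.send_header("Content-length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(('', 8080), SoapHandler).serve_forever()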
Related
Hi, I am trying to access a REST API that can be accessed only after login. I am using the code below but getting 401, access denied. I am sure that if the same cookies were applied to the next PUT call, it would not give access denied, but the Python session is not using the same cookies; instead it is adding new ones. Thanks.
with requests.Session() as s:
    logging.info("Trying to login")
    response1 = s.post("https://localhost:8080/api/authentication?j_username=admin&j_password=admin", verify=False)
    for cookie in s.cookies:
        logging.info(str(cookie.name) + " : " + str(cookie.value))
    logging.info("logged in successfully " + str(response1.status_code))
    url = url1 % (params['key'])
    logging.info("inspector profile inspect api : " + url)
    response = s.put(url, verify=False)
    for cookie in s.cookies:
        logging.info(str(cookie.name) + " :: " + str(cookie.value))
    logging.info("code:-->" + str(response.status_code))
The output is:
CSRF-TOKEN : c3ea875b-3df9-4bd4-992e-2b976c150ea6
JSESSIONID : M3WWdp0PO95ENQSJciqiEbiHZR6ge7O8HkKDkY6R
logged in successfully 200
profile api : localhost:8080/api/test/283
CSRF-TOKEN :--> e5b64a66-5402-430b-8f51-d8d7549fd84e
JSESSIONID :--> JUZBHKmqsitvlrPvWuaqfTJNH1PIJcEXPTkPYPKk
CSRF-TOKEN :--> c3ea875b-3df9-4bd4-992e-2b976c150ea6
JSESSIONID :--> M3WWdp0PO95ENQSJciqiEbiHZR6ge7O8HkKDkY6R
code:401
It looks like the next API call is not using the cookies. Please help me out.
Just finished debugging the same issue.
By RFC 2965:
The term effective host name is related to host name. If a host name
contains no dots, the effective host name is that name with the
string .local appended to it. Otherwise the effective host name is
the same as the host name. Note that all effective host names
contain at least one dot.
The Python Requests module uses the http.cookiejar module to handle cookies. It verifies received cookies before applying them to a session.
Use the following code to get debug output:
import logging
import http.cookiejar
logging.basicConfig(level=logging.DEBUG)
http.cookiejar.debug = True
Here is an example where a received cookie is not applied:
DEBUG:http.cookiejar:add_cookie_header
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): localhost
DEBUG:urllib3.connectionpool:http://localhost:80 "POST /api/login HTTP/1.1" 200 6157
DEBUG:http.cookiejar:extract_cookies: Date: Thu, 30 Apr 2020 15:45:11 GMT
Server: Werkzeug/0.14.1 Python/3.5.3
Content-Type: application/json
Content-Length: 6157
Set-Cookie: token=1234; Domain=localhost; Path=/
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
DEBUG:http.cookiejar: - checking cookie token=1234
DEBUG:http.cookiejar: non-local domain .localhost contains no embedded dot
For requests sent to localhost, the cookie jar expects the web server to set the domain part of the cookie to localhost.local.
Here is an example where the received cookie is applied correctly:
DEBUG:http.cookiejar:add_cookie_header
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): localhost
DEBUG:urllib3.connectionpool:http://localhost:80 "POST /api/login HTTP/1.1" 200 6157
DEBUG:http.cookiejar:extract_cookies: Date: Thu, 30 Apr 2020 15:52:08 GMT
Server: Werkzeug/0.14.1 Python/3.5.3
Content-Type: application/json
Content-Length: 6157
Set-Cookie: token=1234; Domain=localhost.local; Path=/
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
DEBUG:http.cookiejar: - checking cookie token=1234
DEBUG:http.cookiejar: setting cookie: <Cookie token=1234 for .localhost.local/>
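If you control the web server, one option is to have it set the cookie domain explicitly so the cookie passes the effective-host-name check. A sketch assuming Flask (to match the Werkzeug server in the debug output above); the endpoint, cookie values, and port are placeholders:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/login", methods=["POST"])
def login():
    resp = jsonify(status="ok")
    # ".local" gives localhost the embedded dot that http.cookiejar requires
    resp.set_cookie("token", "1234", domain="localhost.local", path="/")
    return resp

if __name__ == "__main__":
    app.run(port=8080)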
If you cannot fix the web server, use 127.0.0.1 instead of localhost in your request:
response1 = s.post("https://127.0.0.1:8080/api/authentication?j_username=admin&j_password=admin", verify=False)
This code worked for me:
from requests import Session

s = Session()
s.auth = ('username', 'password')  # basic auth sent with every request
s.get('http://host/login/page/')   # establishes the session cookies
response = s.get('http://host/login-required-pages/')
You did not actually authenticate successfully to the website, despite having the CSRF-TOKEN and JSESSIONID cookies. The session data, including whether or not you're authenticated, is stored on the server side, and those cookies you're getting are only keys to that session data.
One problem I see with the way you're authenticating is that you're posting the username and password as a query string, which is usually only done for GET requests.
Try posting with proper payload instead:
response1 = s.post("https://localhost:8080/api/authentication", data={'j_username': 'admin', 'j_password': 'admin'}, verify=False)
I'm getting an error when receiving a multipart response.
WARNING connectionpool Failed to parse headers (url=************): [StartBoundaryNotFoundDefect(), MultipartInvariantViolationDefect()], unparsed data: ''
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 399, in _make_request
assert_header_parsing(httplib_response.msg)
File "/usr/local/lib/python3.6/site-packages/urllib3/util/response.py", line 66, in assert_header_parsing
raise HeaderParsingError(defects=defects, unparsed_data=unparsed_data)
urllib3.exceptions.HeaderParsingError: [StartBoundaryNotFoundDefect(), MultipartInvariantViolationDefect()], unparsed data: ''
Does this mean that the library does not support multipart responses? The response from my server works in all other cases, including in the browser, so I'm a little confused.
Any ideas?
This is what is coming back from the server (of course body truncated for brevity):
HTTP/1.1 200 OK
X-Powered-By: Servlet/3.1
X-CA-Affinity: 2411441258
Cache-Control: no-cache
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Encoding: gzip
X-Compressed-By: BICompressionFilter
Content-Type: multipart/related; type="text/xml"; boundary="1521336443366.-7832488688540884419.-1425166373"
Content-Language: en-US
Transfer-Encoding: chunked
Date: Sun, 18 Mar 2018 01:27:23 GMT
a
154e
<binary gzip-compressed chunk data, truncated>
Of course this is encoded. If I decode it in Fiddler this is what it looks like:
HTTP/1.1 200 OK
X-Powered-By: Servlet/3.1
X-CA-Affinity: 2411441258
Cache-Control: no-cache
Expires: Thu, 01 Jan 1970 00:00:00 GMT
X-Compressed-By: BICompressionFilter
Content-Type: multipart/related; type="text/xml"; boundary="1521336443366.-7832488688540884419.-1425166373"
Content-Language: en-US
Date: Sun, 18 Mar 2018 01:27:23 GMT
Content-Length: 17419
--1521336443366.-7832488688540884419.-1425166373
Content-Type: text/xml; charset=utf-8
Content-Length: 15261
<?xml version="1.0" encoding="UTF-8"?>
To answer your question: yes, Requests handles multipart responses just fine. Having said that, I have seen the same error you're getting.
This appears to be a bug within urllib3, but it possibly goes as deep as the httplib package that ships with Python. In your case I would guess it comes back to the UTF-8 encoding of the response, which obviously you can't do much about (unless you also maintain the server side).
I believe it is perfectly safe to ignore, but simply including urllib3.disable_warnings() doesn't seem to do the trick for me. If you want to silence this specific warning, you can include a logging filter in your code (credit to the home-assistant maintainers for this approach):
import logging

def filter_urllib3_logging():
    """Filter header errors from urllib3 due to a urllib3 bug."""
    urllib3_logger = logging.getLogger("urllib3.connectionpool")
    if not any(isinstance(x, NoHeaderErrorFilter)
               for x in urllib3_logger.filters):
        urllib3_logger.addFilter(NoHeaderErrorFilter())

class NoHeaderErrorFilter(logging.Filter):
    """Filter out urllib3 Header Parsing Errors due to a urllib3 bug."""

    def filter(self, record):
        """Filter out Header Parsing Errors."""
        return "Failed to parse headers" not in record.getMessage()
Then, just call filter_urllib3_logging() in your setup. It doesn't stop the warnings, but it DOES hide them :D
!!PLEASE NOTE!! This will also hide, and thus make it difficult to diagnose, any error caused by parsing headers, which occasionally could be a legitimate error!
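For completeness, a minimal usage sketch (the URL is a placeholder):

import logging
import requests

logging.basicConfig(level=logging.INFO)
filter_urllib3_logging()  # install the filter once, before any requests are made

# The spurious "Failed to parse headers" warning is now suppressed.
response = requests.get("https://example.com/multipart-endpoint")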
I am using this code for logging in to my app. It works fine, but when I try to get the URL for the profile picture
pic = facebook.get("/me/picture?fields=url")
I get None in response.
TypeError: must be string or buffer, not None
If I try sending this GET request via the Facebook Graph API, with the same access token, I get the profile link.
I checked the version by printing out the headers of a get("/me") request; I get:
connection: keep-alive
etag: "2c59a2baef08156f18055c64eaa9d9822e35e8f1"
pragma: no-cache
cache-control: private, no-cache, no-store, must-revalidate
date: Fri, 23 Jun 2017 14:20:04 GMT
**facebook-api-version: v2.9**
access-control-allow-origin: *
content-type: text/javascript; charset=UTF-8
This shows that the version is converted automatically when I send the request from Flask-OAuth as well. Then what is it that I am missing?
As documented in the Graph API, "me/picture" will return a 302 redirect to the picture image. To get access to the data about the picture, we need to include redirect=false at the end of the query. So I get the URL with:
pic = facebook.get('/me/picture?redirect=false').data
print 'picture:', pic['data']['url']
I have set up simple python script that responds with hello:
def main(environ, start_response):
    start_response('200 OK', [
        ('Content-type', 'text/plain')
    ])
    return 'hello'
Everything works fine in Chrome; I can refresh the page every second.
But in Firefox I get a 'pending' status, and eventually, after a veeeeeery long time, Firefox shows the response message.
What's wrong here? I tried with Content-Length, but it didn't help.
Here are the responses:
No, tested with two different Firefox installations on separate machines.
Firefox:
Status=OK - 200
Server=nginx
Date=Thu, 24 Jan 2013 14:28:31 GMT
Content-Type=text/plain
Transfer-Encoding=chunked
Connection=keep-alive
Content-Encoding=gzip
Chrome:
HTTP/1.1 200 OK
Server: nginx
Date: Thu, 24 Jan 2013 20:24:06 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
Content-Encoding: gzip
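One thing that may be worth checking (an observation, not a confirmed fix): PEP 333 recommends returning the body as a one-element list rather than a bare string, because a bare string is iterated character by character and each character can end up as a separate write, which interacts poorly with chunked transfer encoding. A minimal variant:

def main(environ, start_response):
    body = 'hello'
    start_response('200 OK', [
        ('Content-type', 'text/plain'),
        ('Content-Length', str(len(body)))
    ])
    # A one-element list lets the server send the body in a single write.
    return [body]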
I am trying to use the YouTube services with OAuth. I have been able to obtain request tokens, authorize them and transform them into access tokens.
Now I am trying to use those tokens to actually do requests to the YouTube services. For instance I am trying to add a video to a playlist. Hence I am making a POST request to
https://gdata.youtube.com/feeds/api/playlists/XXXXXXXXXXXX
sending a body of
<?xml version="1.0" encoding="UTF-8"?>
<entry xmlns="http://www.w3.org/2005/Atom"
xmlns:yt="http://gdata.youtube.com/schemas/2007">
<id>XXXXXXXXX</id>
</entry>
and with the headers
Gdata-version: 2
Content-type: application/atom+xml
Authorization: OAuth oauth_consumer_key="www.xxxxx.xx",
oauth_nonce="xxxxxxxxxxxxxxxxxxxxxxxxx",
oauth_signature="XXXXXXXXXXXXXXXXXXX",
oauth_signature_method="HMAC-SHA1",
oauth_timestamp="1310985770",
oauth_token="1%2FXXXXXXXXXXXXXXXXXXXX",
oauth_version="1.0"
X-gdata-key: key="XXXXXXXXXXXXXXXXXXXXXXXXX"
plus some standard headers (Host and Content-Length) which are added by urllib2 (I am using Python) at the moment of the request.
Unfortunately, I get an Error 401: Unknown authorization header, and the headers of the response are
X-GData-User-Country: IT
WWW-Authenticate: GoogleLogin service="youtube",realm="https://www.google.com/youtube/accounts/ClientLogin"
Content-Type: text/html; charset=UTF-8
Content-Length: 179
Date: Mon, 18 Jul 2011 10:42:50 GMT
Expires: Mon, 18 Jul 2011 10:42:50 GMT
Cache-Control: private, max-age=0
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
Server: GSE
Connection: close
In particular, I do not know how to interpret the WWW-Authenticate header, whose realm hints at ClientLogin.
I have also tried to play with the OAuth Playground, and the Authorization header sent by that site looks exactly like mine, except for the order of the fields. Still, on the playground everything works. Well, almost: I get an error telling me that a developer key is missing, but that is reasonable since there is no way to add one on the playground. Still, I get past the Error 401.
I have also tried to manually copy the Authorization header from there, and I got an Error 400: Bad request.
What am I doing wrong?
Turns out the problem was the newline before xmlns:yt. I was able to debug this using ncat, as suggested here, and inspecting the full response.
I would suggest using the oauth2 Python module, because it is much simpler and takes care of the auth headers :) https://github.com/simplegeo/python-oauth2. As a solution, I suggest you encode your parameters with 'utf-8'; I had a similar problem, and the solution was that Google was expecting UTF-8 encoded strings.
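For example, a rough sketch with python-oauth2 (keys, secrets, and IDs are placeholders; the library builds and signs the Authorization header for you):

import oauth2 as oauth

consumer = oauth.Consumer(key='www.xxxxx.xx', secret='CONSUMER_SECRET')
token = oauth.Token(key='ACCESS_TOKEN', secret='TOKEN_SECRET')
client = oauth.Client(consumer, token)

body = ('<?xml version="1.0" encoding="UTF-8"?>'
        '<entry xmlns="http://www.w3.org/2005/Atom" '
        'xmlns:yt="http://gdata.youtube.com/schemas/2007">'
        '<id>XXXXXXXXX</id></entry>')

resp, content = client.request(
    'https://gdata.youtube.com/feeds/api/playlists/XXXXXXXXXXXX',
    method='POST',
    body=body.encode('utf-8'),  # Google expects UTF-8 encoded strings
    headers={
        'GData-Version': '2',
        'Content-Type': 'application/atom+xml',
        'X-GData-Key': 'key=XXXXXXXXXXXXXXXXXXXXXXXXX',
    })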