I have built a REST interface. On '400 Bad Request' it returns a JSON body with specific information about the error.
(Pdb) error.code
400
Python correctly raises an HTTPError (a subclass of URLError) with these headers:
(Pdb) print(error.headers)
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Expires: -1
Server: Microsoft-IIS/7.5
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sat, 20 Aug 2016 13:01:05 GMT
Connection: close
Content-Length: 236
There is a body of 236 characters, but I cannot find a way to read it.
I can see the extra information using the DHC Chrome plugin:
{
"error_code": "00000001",
"error_message": "The json data is not in the correct json format.\r\nThe json data is not in the correct json format.\r\n'Execution Start Time' must not be empty.\r\n'Execution End Time' must not be empty.\r\n"
}
Here are some of the things I have tried in Python, and what was returned:
(Pdb) len(error.read())
0
(Pdb) error.read().decode('utf-8', 'ignore')
''
(Pdb) error.readline()
b''
I found that this works the first time it is called, but returns an empty result if called again:
error.read().decode('utf-8')
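In other words, the body of an HTTPError is a one-shot stream: the first read() consumes it, and every later call returns an empty byte string. A minimal sketch of reading the body once and caching it (assuming Python 3's urllib; the endpoint URL is a hypothetical placeholder, and the field names mirror the JSON shown above):

import json
import urllib.request
import urllib.error

try:
    urllib.request.urlopen('https://example.com/api/endpoint')  # hypothetical endpoint
except urllib.error.HTTPError as error:
    body = error.read().decode('utf-8')  # read exactly once, keep the result
    details = json.loads(body)
    print(details['error_code'], details['error_message'])

Any later access should go through the cached body variable rather than calling error.read() again.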
I'm building a FastAPI app with a function that authenticates against a CouchDB instance. To talk to CouchDB, I use the (now unmaintained) library python-couchdb.
Here is the relevant portion of code that illustrates my issue:
from fastapi import FastAPI
from fastapi.responses import JSONResponse  # used further down for the cookie response
from pydantic import BaseModel
from couchdb import http
from couchdb import Unauthorized

app = FastAPI()

# url is the CouchDB base URL, defined elsewhere
resource = http.Resource(url, http.Session())

class RegisteredUser(BaseModel):
    email: str
    password: str

@app.post("/login")
async def log_user(user: RegisteredUser):
    # some email format verifications here
    # ...
    try:
        status, headers, _ = resource.post_json('_session', {
            'name': user.email,
            'password': user.password,
        })
    except Exception as e:
        if isinstance(e, Unauthorized):
            return 403
        else:
            return 500
    # tests
    print(headers)
The headers look like:
Cache-Control: must-revalidate
Content-Length: 54
Content-Type: application/json
Date: Sat, 08 Aug 2020 19:19:49 GMT
Server: CouchDB/3.1.0 (Erlang OTP/22)
Set-Cookie: AuthSession=am9zZWJvdmVAam9zZWJvdmUuY29tOjVGMkVGQUQ2Op-UUD22VvdxYzbMNp92e30Er_z0; Version=1; Expires=Sat, 08-Aug-2020 20:59:50 GMT; Max-Age=6000; Path=/; HttpOnly
At this point (if no error is raised), I'd like to send the cookie that CouchDB provides back to the (browser) client. Something like:
...
# tests
print(headers)
if status == 200 and 'Set-Cookie' in headers:
    return JSONResponse(content=True, headers=headers)
else:
    return status
I'm not used to session cookies, and I'm not sure whether I should send back the full headers or just the headers['Set-Cookie'] part.
Whatever I do, I end up with the same error message:
RuntimeError: Response content shorter than Content-Length
Would you mind explaining what the error is saying, and how I can solve my case? Thanks!
I found this SO thread, but it didn't give me a clue either: FastAPI middleware peeking into responses.
The error means that the response you build still carries CouchDB's Content-Length: 54 (copied over with the full header set), while your new body (true) is much shorter, so the server refuses to send it. Forward only the Set-Cookie header instead of the whole header set. Here is one solution:
if status == 200 and 'Set-Cookie' in headers:
    return JSONResponse(content=True, headers={
        'Set-Cookie': headers['Set-Cookie']
    })
else:
    return status
Thanks @HernánAlarcón!
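For reference, a variant that avoids copying any upstream header at all: parse the cookie with the standard library's SimpleCookie and let Starlette's set_cookie build a fresh header. This is a sketch, assuming the cookie is named AuthSession as in the headers shown above:

from http.cookies import SimpleCookie
from fastapi.responses import JSONResponse

if status == 200 and 'Set-Cookie' in headers:
    cookie = SimpleCookie(headers['Set-Cookie'])
    morsel = cookie['AuthSession']
    response = JSONResponse(content=True)
    response.set_cookie(
        key='AuthSession',
        value=morsel.value,
        max_age=int(morsel['max-age']) if morsel['max-age'] else None,
        path=morsel['path'] or '/',
        httponly=True,  # CouchDB marks the session cookie HttpOnly
    )
    return response
else:
    return status

This way no Content-Length from CouchDB ever reaches the client, so the length check cannot fail.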
I am querying the New Relic API, trying to pull CPU utilization from the metrics that they provide. When I run the following curl command (after exporting the correct proxy settings), I see the following information, which contains the percentage value that I want:
curl -X GET "https://api.newrelic.com/v2/applications/140456413/hosts/21044947/metrics/data.json" -H "X-Api-Key:myapikey" -i -d 'names[]=CPU/User+Time&values[]=percent&summarize=true&from=2018-07-15 16:22:30&to=2018-07-16 16:22:30'
HTTP/1.0 200 Connection Established
Proxy-agent: Apache
HTTP/1.1 200 OK
Server: openresty
Date: Tue, 17 Jul 2018 11:53:22 GMT
Content-Type: application/json
Content-Length: 290
Connection: keep-alive
Status: 200 OK
X-UA-Compatible: IE=Edge,chrome=1
ETag: "a35b451e27a49d0d5f4e16715429a17d"
Cache-Control: max-age=0, private, must-revalidate
X-Request-Id: f4f8675f095aba80d5e089fbcbf1b172
X-Runtime: 0.168283
X-Rack-Cache: miss
{"metric_data":{"from":"2018-07-15T16:22:30+00:00","to":"2018-07-16T16:22:30+00:00","metrics_not_found":[],"metrics_found":["CPU/User Time"],"metrics":[{"name":"CPU/User Time","timeslices":[{"from":"2018-07-15T16:22:00+00:00","to":"2018-07-16T16:22:00+00:00","values":{"percent":9.52}}]}]}}
However, when I try to do the same with the Python requests module, the "percent" value that I am interested in comes back as 0. This is my code:
options = {"names[]": "CPU/User+Time", "values[]": "percent", "summarize": "true", "from": str(end_date), "to": str(start_date - datetime.timedelta(hours=6))}
path = "applications/140456413/hosts/" + server_id + "/metrics/data.json"
api_key = "myapikey"
headers = {'X-Api-Key': api_key}
url = "https://api.newrelic.com/v2/" + path
r = requests.get(url, headers=headers, data=options, proxies=myproxy.proxies)
This is what I get instead (notice the percent value is now 0):
{u'metric_data': {u'metrics': [{u'timeslices': [{u'values': {u'percent': 0}, u'to': u'2018-07-17T01:35:00+00:00', u'from': u'2018-07-16T07:35:00+00:00'}], u'name': u'CPU/User+Time'}], u'to': u'2018-07-17T01:35:59+00:00', u'metrics_found': [u'CPU/User+Time'], u'from': u'2018-07-16T07:35:59+00:00', u'metrics_not_found': []}}
How can I adjust the Python request to produce the same output as the curl command? We originally passed the options as a single string of name=value pairs instead of a dict, but the requests module would not process them in that format.
These were the docs pages I referenced:
https://docs.newrelic.com/docs/apis/rest-api-v2/requirements/specifying-time-range-v2
https://docs.newrelic.com/docs/apis/rest-api-v2/application-examples-v2/get-average-cpu-usage-host-app
https://docs.newrelic.com/docs/apis/rest-api-v2/getting-started/introduction-new-relic-rest-api-v2#examples
Thanks.
Change the options a bit:
"values[]": "percent"
to:
"values": ["percent"]
I want to specify the HTTP response charset by modifying the Content-Type header. However, it doesn't work.
Here is a short example:
#coding=utf-8
import cherrypy

class Website:
    @cherrypy.expose()
    def index(self):
        cherrypy.response.headers['Content-Type'] = 'text/plain; charset=gbk'
        return '。。。'.encode('gbk')

cherrypy.quickstart(Website(), '/', {
    '/': {
        'tools.response_headers.on': True,
    }
})
And when I visit that page, the Content-Type is changed mysteriously to text/plain;charset=utf-8, causing mojibake in the browser.
C:\Users\Administrator>ncat 127.0.0.1 8080 -C
GET / HTTP/1.1
Host: 127.0.0.1:8080
HTTP/1.1 200 OK
Server: CherryPy/7.1.0
Content-Length: 6
Content-Type: text/plain;charset=utf-8
Date: Mon, 22 Aug 2016 01:08:13 GMT
。。。^C
It seems that CherryPy detects the content encoding and overrides the charset automatically. If so, how can I disable this behavior?
All right, I solved this problem by tampering with cherrypy.response.header_list directly.
#coding=utf-8
import cherrypy

def set_content_type():
    header = (b'Content-Type', cherrypy.response._content_type.encode())
    for ind, (key, _) in enumerate(cherrypy.response.header_list):
        if key.lower() == b'content-type':
            cherrypy.response.header_list[ind] = header
            break
    else:
        cherrypy.response.header_list.append(header)

cherrypy.tools.set_content_type = cherrypy.Tool('on_end_resource', set_content_type)

class Website:
    @cherrypy.expose()
    @cherrypy.tools.set_content_type()
    def index(self):
        cherrypy.response._content_type = 'text/plain; charset=gbk'
        return '。。。'.encode('gbk')

cherrypy.quickstart(Website(), '/')
I had success setting the Content-Type charset by manipulating the request header attribute "Accept-Charset":
cherrypy.request.headers["Accept-Charset"] = "ISO-8859-1"
cherrypy.response.headers["Content-Type"] = "text/xml"
The result:
>curl -I https://127.0.0.1/url?param=value
HTTP/1.1 200 OK
Content-Type: text/xml;charset=ISO-8859-1
Server: CherryPy/18.6.0
Date: Mon, 10 Aug 2020 11:54:49 GMT
Content-Length: 288
Set-Cookie: session_id=d28fa46a1a3d901d9502038255ce869b21add5cc; expires=Mon, 10 Aug 2020 12:54:49 GMT; Max-Age=3600; Path=/
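If tampering with header_list or Accept-Charset feels too fragile, CherryPy's built-in encode tool can also be pointed at a specific charset in config. A minimal sketch, assuming the tool behaves as documented and the handler returns an unencoded string:

#coding=utf-8
import cherrypy

class Website:
    @cherrypy.expose()
    def index(self):
        cherrypy.response.headers['Content-Type'] = 'text/plain'
        return '。。。'  # returned as text; the encode tool serializes it

cherrypy.quickstart(Website(), '/', {
    '/': {
        'tools.encode.on': True,
        'tools.encode.encoding': 'gbk',  # forces charset=gbk on the response
    }
})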
I am trying to access OAuth2 in Python (the code is the same as http://code.google.com/p/google-api-ads-python/source/browse/trunk/examples/adspygoogle/adwords/v201302/misc/use_oauth2.py?spec=svn139&r=139):
flow = OAuth2WebServerFlow(client_id='XXX',
                           client_secret='YYY',
                           scope='https://adwords.google.com/api/adwords',
                           user_agent='ZZZ')
authorize_url = flow.step1_get_authorize_url('urn:ietf:wg:oauth:2.0:oob')
code = raw_input('Code: ').strip()
credential = None
try:
    credential = flow.step2_exchange(code)  # <- error
except FlowExchangeError, e:
    sys.exit('Authentication has failed: %s' % e)
This produces a "socket.error: [Errno 10054]" error at step2_exchange, and Python cuts off the exact message.
So after checking the key with the OAuth Playground (to get a better error message), I get this error:
HTTP/1.1 400 Bad Request
Content-length: 37
X-xss-protection: 1; mode=block
X-content-type-options: nosniff
X-google-cache-control: remote-fetch
-content-encoding: gzip
Server: GSE
Via: HTTP/1.1 GWA
Pragma: no-cache
Cache-control: no-cache, no-store, max-age=0, must-revalidate
Date: Thu, 06 Jun 2013 13:54:29 GMT
X-frame-options: SAMEORIGIN
Content-type: application/json
Expires: Fri, 01 Jan 1990 00:00:00 GMT
{
"error" : "unauthorized_client"
}
I checked that the client_id (for installed apps) and the client_secret are identical to the ones specified in the Google API Console (https://code.google.com/apis/console/).
If I do the whole process through the OAuth Playground it works, but if I try to use a token created by the Playground, the app fails as well.
Does anyone know how to fix this?
Fixed it. I am behind a proxy which lets the step 1 auth through, but apparently not the step 2 auth. So a simple
h = httplib2.Http(proxy_info = httplib2.ProxyInfo PROXY DATA .....)
flow.step2_exchange(code, h)
fixed it.
There is an example of how to configure proxy_info in httplib2 at https://code.google.com/p/httplib2/wiki/Examples, which says:
import httplib2
import socks
httplib2.debuglevel=4
h = httplib2.Http(proxy_info = httplib2.ProxyInfo(socks.PROXY_TYPE_HTTP, 'localhost', 8000))
r,c = h.request("http://bitworking.org/news/")
However, I've found that the latest httplib2 ships with a cleaned-up socks module, so you really want to do something more like:
import httplib2
ht = httplib2.Http(proxy_info=httplib2.ProxyInfo(httplib2.socks.PROXY_TYPE_HTTP, 'name_or_ip_of_the_proxy_box', proxy_port))
flow.step2_exchange(code, ht)
Also, you want to be using oauth2client >= 1.0beta8, which requires httplib2 >= 0.7.4; that is where HTTP proxy support was cleaned up in both packages.
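Putting the pieces together, a minimal end-to-end sketch (Python 2 era, matching the code above; the proxy host and port are hypothetical placeholders):

import httplib2
from oauth2client.client import OAuth2WebServerFlow, FlowExchangeError

# Hypothetical proxy host and port -- substitute your own.
proxy = httplib2.ProxyInfo(httplib2.socks.PROXY_TYPE_HTTP, 'proxy.example.com', 8080)
http = httplib2.Http(proxy_info=proxy)

flow = OAuth2WebServerFlow(client_id='XXX',
                           client_secret='YYY',
                           scope='https://adwords.google.com/api/adwords',
                           user_agent='ZZZ')
authorize_url = flow.step1_get_authorize_url('urn:ietf:wg:oauth:2.0:oob')
print 'Visit: %s' % authorize_url
code = raw_input('Code: ').strip()

try:
    # Pass the proxied Http object so the token exchange also goes through the proxy.
    credential = flow.step2_exchange(code, http=http)
except FlowExchangeError, e:
    raise SystemExit('Authentication has failed: %s' % e)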
I am calling the URL:
http://code.google.com/feeds/issues/p/chromium/issues/full/291?alt=json
using urllib2 and decoding the response with the json module:
url = "http://code.google.com/feeds/issues/p/chromium/issues/full/291?alt=json"
request = urllib2.Request(query)
response = urllib2.urlopen(request)
issue_report = json.loads(response.read())
I run into the following error:
ValueError: Invalid control character at: line 1 column 1120 (char 1120)
I tried checking the headers and got the following:
Content-Type: application/json; charset=UTF-8
Access-Control-Allow-Origin: *
Expires: Sun, 03 Jul 2011 17:38:38 GMT
Date: Sun, 03 Jul 2011 17:38:38 GMT
Cache-Control: private, max-age=0, must-revalidate, no-transform
Vary: Accept, X-GData-Authorization, GData-Version
GData-Version: 1.0
ETag: W/"CUEGQX47eCl7ImA9WxJaFEw."
Last-Modified: Tue, 04 Aug 2009 19:20:20 GMT
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
Server: GSE
Connection: close
I also tried adding an encoding parameter as follows:
issue_report = json.loads(response.read() , encoding = 'UTF-8')
I still run into the same error.
The feed has raw data from a JPEG in it at that point; the JSON is malformed, so it's not your fault. Report a bug to Google.
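If the offending bytes are confined to string values, one possible workaround (not a fix) is json.loads with strict=False, which tells the decoder to tolerate control characters inside strings; truly malformed JSON will still fail to parse:

import json
import urllib2

url = "http://code.google.com/feeds/issues/p/chromium/issues/full/291?alt=json"
raw = urllib2.urlopen(url).read()
# strict=False permits control characters (e.g. \x00-\x1f) inside JSON strings.
issue_report = json.loads(raw, strict=False)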
You could consider using lxml instead, since the JSON is malformed. Its XPath support makes working with XML pretty straightforward:
import lxml.etree
url = 'http://code.google.com/feeds/issues/p/chromium/issues/full/291'
doc = lxml.etree.parse(url)
ns = {'issues': 'http://schemas.google.com/projecthosting/issues/2009'}
issues = doc.xpath('//issues:*', namespaces=ns)
It is fairly easy to manipulate the elements, for instance to strip the namespace from the tags and convert to a dict:
>>> dict((x.tag[len(ns['issues'])+2:], x.text) for x in issues)
{'closedDate': '2009-08-04T19:20:20.000Z',
 'id': '291',
 'label': 'Area-BrowserUI',
 'stars': '13',
 'state': 'closed',
 'status': 'Verified'}