Compatibility issue (python 2 vs python 3) - python

I have a program that will auto create a Github issue via the API. It works in Python 2.7, but when I run it with Python 3 I get the following error:
Traceback (most recent call last):
  File "/home/baal/bin/python/zeus-scanner/var/auto_issue/github.py", line 92, in request_issue_creation
    urllib2.urlopen(req, timeout=10).read()
  File "/usr/lib/python3.5/urllib/request.py", line 163, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python3.5/urllib/request.py", line 464, in open
    req = meth(req)
  File "/usr/lib/python3.5/urllib/request.py", line 1183, in do_request_
    raise TypeError(msg)
TypeError: POST data should be bytes or an iterable of bytes. It cannot be of type str.
I have the following method that creates a GitHub issue (successful in Python 2, unsuccessful in Python 3):
def request_issue_creation():
    logger.info(set_color(
        "Zeus got an unexpected error and will automatically create an issue for this error, please wait..."
    ))

    def __extract_stacktrace(file_data):
        logger.info(set_color(
            "extracting traceback from log file..."
        ))
        retval, buff_mode, _buffer = [], False, ""
        with open(file_data, "r+") as log:
            for line in log:
                if "Traceback" in line:
                    buff_mode = True
                if line and len(line) < 5:
                    buff_mode = False
                    retval.append(_buffer)
                    _buffer = ""
                if buff_mode:
                    if len(line) > 400:
                        line = line[:400] + "...\n"
                    _buffer += line
        return "".join(retval)

    logger.info(set_color(
        "getting authorization..."
    ))

    encoded = __get_encoded_string()
    n = get_decode_num(encoded)
    token = decode(n, encoded)

    current_log_file = get_latest_log_file(CURRENT_LOG_FILE_PATH)
    stacktrace = __extract_stacktrace(current_log_file)
    issue_title = stacktrace.split("\n")[-2]

    issue_data = {
        "title": issue_title,
        "body": "Error info:\n```{}```\n\n"
                "Running details:\n`{}`\n\n"
                "Commands used:\n`{}`\n\n"
                "Log file info:\n```{}```".format(
            str(stacktrace),
            str(platform.platform()),
            " ".join(sys.argv),
            open(current_log_file).read()
        ),
    }

    try:
        req = urllib2.Request(
            url="https://api.github.com/repos/<API-REPO>/issues", data=json.dumps(issue_data),
            headers={"Authorization": "token {}".format(token)}
        )
        urllib2.urlopen(req, timeout=10).read()
        logger.info(set_color(
            "issue has been created successfully with the following name '{}'...".format(issue_title)
        ))
    except Exception as e:
        logger.exception(set_color(
            "failed to auto create the issue, got exception '{}', "
            "you may manually create an issue...".format(e), level=50
        ))
I read online that encoding the string to UTF-8 will fix the issue; however, I'm not sure how that applies here. Any help would be greatly appreciated, thank you.

You need to encode your JSON payload:
data = json.dumps(issue_data)
if sys.version_info > (3,):  # Python 3
    data = data.encode('utf8')

req = urllib2.Request(
    url="https://api.github.com/repos/<API-REPO>/issues", data=data,
    headers={"Authorization": "token {}".format(token),
             "Content-Type": "application/json; charset=utf-8"}
)
I added a Content-Type header with a charset parameter to communicate the codec used to the server. This is not always needed, since JSON's default codec is UTF-8. If you don't specify the header, a (wrong) default charset could be assumed; whether that matters depends on the server.
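If the script needs to keep working on both interpreters, one common pattern (a sketch, not taken from the question's code base; the helper name post_json is made up) is to alias the two urllib APIs at import time and always encode the payload to bytes:

import json

try:  # Python 2
    from urllib2 import Request, urlopen
except ImportError:  # Python 3
    from urllib.request import Request, urlopen

def post_json(url, payload, token):
    # json.dumps() returns str; .encode() turns it into bytes, which both versions accept
    data = json.dumps(payload).encode('utf-8')
    req = Request(
        url, data=data,
        headers={"Authorization": "token {}".format(token),
                 "Content-Type": "application/json; charset=utf-8"}
    )
    return urlopen(req, timeout=10).read()

In Python 2, str and bytes are the same type, so encoding unconditionally works there as well and avoids the version check entirely.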

Related

How can I parse a JSON response? [duplicate]

I am getting the error Expecting value: line 1 column 1 (char 0) when trying to decode JSON.
The URL I use for the API call works fine in the browser, but gives this error when done through a curl request. The following is the code I use for the curl request.
The error happens at return simplejson.loads(response_json)
response_json = self.web_fetch(url)
response_json = response_json.decode('utf-8')
return json.loads(response_json)
def web_fetch(self, url):
    buffer = StringIO()
    curl = pycurl.Curl()
    curl.setopt(curl.URL, url)
    curl.setopt(curl.TIMEOUT, self.timeout)
    curl.setopt(curl.WRITEFUNCTION, buffer.write)
    curl.perform()
    curl.close()
    response = buffer.getvalue().strip()
    return response
Traceback:
File "/Users/nab/Desktop/myenv2/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
  111. response = callback(request, *callback_args, **callback_kwargs)
File "/Users/nab/Desktop/pricestore/pricemodels/views.py" in view_category
  620. apicall=api.API().search_parts(category_id= str(categoryofpart.api_id), manufacturer = manufacturer, filter = filters, start=(catpage-1)*20, limit=20, sort_by='[["mpn","asc"]]')
File "/Users/nab/Desktop/pricestore/pricemodels/api.py" in search_parts
  176. return simplejson.loads(response_json)
File "/Users/nab/Desktop/myenv2/lib/python2.7/site-packages/simplejson/__init__.py" in loads
  455. return _default_decoder.decode(s)
File "/Users/nab/Desktop/myenv2/lib/python2.7/site-packages/simplejson/decoder.py" in decode
  374. obj, end = self.raw_decode(s)
File "/Users/nab/Desktop/myenv2/lib/python2.7/site-packages/simplejson/decoder.py" in raw_decode
  393. return self.scan_once(s, idx=_w(s, idx).end())

Exception Type: JSONDecodeError at /pricemodels/2/dir/
Exception Value: Expecting value: line 1 column 1 (char 0)
Your code produced an empty response body; you'd want to check for that or catch the exception raised. It is possible the server responded with a 204 No Content response, or a non-200-range status code was returned (404 Not Found, etc.). Check for this.
Note:
There is no need to use simplejson library, the same library is included with Python as the json module.
There is no need to decode a response from UTF8 to unicode, the simplejson / json .loads() method can handle UTF8 encoded data natively.
pycurl has a very archaic API. Unless you have a specific requirement for using it, there are better choices.
Either requests or httpx offers a much friendlier API, including JSON support. If you can, replace your call with:
import requests

response = requests.get(url)
response.raise_for_status()  # raises an exception when not a 2xx response
if response.status_code != 204:
    return response.json()
Of course, this won't protect you from a URL that doesn't comply with HTTP standards; when using arbitrary URLs where this is a possibility, check if the server intended to give you JSON by checking the Content-Type header, and for good measure catch the exception:
if (
    response.status_code != 204 and
    response.headers["content-type"].strip().startswith("application/json")
):
    try:
        return response.json()
    except ValueError:
        # decide how to handle a server that's misbehaving to this extent
        ...
Be sure to remember to invoke json.loads() on the contents of the file, as opposed to the file path of that JSON:
json_file_path = "/path/to/example.json"
with open(json_file_path, 'r') as j:
    contents = json.loads(j.read())
I think a lot of people are guilty of doing this every once in a while (myself included):
contents = json.load(json_file_path)
Check the response body: make sure actual data is present and that the data dump appears to be well-formatted.
In most cases, a json.loads JSONDecodeError: Expecting value: line 1 column 1 (char 0) error is due to:
non-JSON conforming quoting
XML/HTML output (that is, a string starting with <), or
incompatible character encoding
Ultimately the error tells you that at the very first position the string already doesn't conform to JSON.
As such, if parsing fails despite having a data body that looks JSON-like at first glance, try replacing the quotes of the data body:
import json
import sys

struct = {}
try:  # try parsing to dict
    dataform = str(response_json).strip("'<>() ").replace('\'', '\"')
    struct = json.loads(dataform)
except:
    print(repr(response_json))
    print(sys.exc_info())
Note: Quotes within the data must be properly escaped
With the requests lib, a JSONDecodeError can happen when you have an HTTP error code like 404 and try to parse the response as JSON!
You must first check for 200 (OK) or let it raise on error to avoid this case.
I wish it failed with a less cryptic error message.
NOTE: as Martijn Pieters stated in the comments, servers can respond with JSON in case of errors (it depends on the implementation), so checking the Content-Type header is more reliable.
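A minimal sketch of that combined check, assuming a requests Response and treating a non-JSON body as missing data (the URL is illustrative):

import requests

url = "https://example.com/api"  # illustrative
response = requests.get(url)
response.raise_for_status()  # raise on 4xx/5xx instead of trying to parse an error page

if response.headers.get("Content-Type", "").startswith("application/json"):
    payload = response.json()
else:
    payload = None  # the server did not claim to return JSON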
Check the encoding format of your file and use the corresponding encoding format while reading the file. It will solve your problem.
with open("AB.json", encoding='utf-8', errors='ignore') as json_data:
data = json.load(json_data, strict=False)
I had the same issue trying to read json files with
json.loads("file.json")
I solved the problem with
with open("file.json", "r") as read_file:
data = json.load(read_file)
maybe this can help in your case
A lot of times, this will be because the string you're trying to parse is blank:
>>> import json
>>> x = json.loads("")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/__init__.py", line 348, in loads
    return _default_decoder.decode(s)
  File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
You can remedy by checking whether json_string is empty beforehand:
import json

if json_string:
    x = json.loads(json_string)
else:
    # Your code/logic here
    x = {}
I encountered the same problem: when printing out the JSON string read from a JSON file, I found that it started with a byte order mark. Some research showed that this happens because the file is decoded as plain UTF-8 by default; by changing the encoding to utf-8-sig, the mark is stripped out and the JSON loads with no problem:
open('test.json', encoding='utf-8-sig')
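For completeness, a minimal sketch of the full read with that encoding (reusing the example file name from above):

import json

with open('test.json', encoding='utf-8-sig') as json_file:
    data = json.load(json_file)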
This is the minimalist solution I found when you want to load a JSON file in Python:
import json
data = json.load(open('file_name.json'))
If this gives an error saying a character doesn't match at position X and Y, just add encoding='utf-8' inside the open() call:
data = json.load(open('file_name.json', encoding='utf-8'))
Explanation
open opens the file, and its contents are then read and parsed by json.load.
Do note that using with open() as f is more reliable than the above syntax, since it makes sure the file gets closed after execution; the complete syntax would be:
with open('file_name.json') as f:
    data = json.load(f)
There may be embedded null characters ('\0') in the data, even after calling decode(). Use replace():
import json

struct = {}
try:
    response_json = response_json.decode('utf-8').replace('\0', '')
    struct = json.loads(response_json)
except:
    print('bad json: ', response_json)
return struct
I had the same issue; in my case I solved it like this:
import json

with open("migrate.json", "rb") as read_file:
    data = json.load(read_file)
I was having the same problem with requests (the Python library). It turned out to be the accept-encoding header.
It was set this way: 'accept-encoding': 'gzip, deflate, br'
I simply removed it from the request and stopped getting the error.
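A sketch of what that change looks like with requests (the URL and the remaining headers are illustrative; as another answer below notes, 'br' only works if the brotli package is installed):

import requests

headers = {
    "accept": "application/json",
    # "accept-encoding": "gzip, deflate, br",  # removed: 'br' requires the brotli package
}
response = requests.get("https://example.com/api", headers=headers)
data = response.json()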
Just check if the response has status code 200. So for example:
if status != 200:
    print("An error has occurred. [Status code", status, "]")
else:
    data = response.json()  # Only convert to JSON when status is OK.
    if not data["elements"]:
        print("Empty JSON")
    else:
        "You can extract data here"
I had exactly this issue using requests.
Thanks to Christophe Roussy for his explanation.
To debug, I used:
response = requests.get(url)
logger.info(type(response))
I was getting a 404 response back from the API.
In my case, I was calling file.read() twice, once in the if block and once in the else block, which was causing this error. So make sure not to make this mistake: read the content into a variable once and use that variable as many times as needed.
In my case it occurred because I read the data of the file using file.read() and then tried to parse it using json.load(file). I fixed the problem by replacing json.load(file) with json.loads(data).
Not working code:
with open("text.json") as file:
    data = file.read()
    json_dict = json.load(file)
Working code:
with open("text.json") as file:
    data = file.read()
    json_dict = json.loads(data)
For me, it was not using authentication in the request.
For me, the server was responding with something other than 200 and the response was not JSON-formatted. I ended up doing this before the JSON parse:
# this is the https request for data in json format
response_json = requests.get()

# only proceed if I have a 200 response which is saved in status_code
if response_json.status_code == 200:
    response = response_json.json()  # converting from json to dictionary using json library
I received such an error in a Python-based web API's response.text, but it led me here, so this may help others with a similar issue (it's very difficult to filter response and request issues in a search when using requests).
Using json.dumps() on the request data arg to create a correctly-escaped string of JSON before POSTing fixed the issue for me
requests.post(url, data=json.dumps(data))
In my case it was because the server was giving an HTTP error occasionally. So basically, once in a while my script got a response like this rather than the expected response:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<h1>502 Bad Gateway</h1>
<p>The proxy server received an invalid response from an upstream server.<hr/>Powered by Tengine</body>
</html>
Clearly this is not in json format and trying to call .json() will yield JSONDecodeError: Expecting value: line 1 column 1 (char 0)
You can print the exact response that causes this error to debug it better.
For example, if you are using requests, simply printing the .text field (before you call .json()) would do.
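Something along these lines (a sketch, assuming a requests Response object named response):

print(response.status_code)   # was it actually a 2xx?
print(response.text[:500])    # inspect the raw body before trying to parse it
data = response.json()        # only call this once the body really looks like JSON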
I did:
Open test.txt file, write data
Open test.txt file, read data
So I didn't close the file after step 1.
I added
outfile.close()
and now it works
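A sketch of the safer pattern (the file name and payload are illustrative); using with blocks guarantees the file is flushed and closed after the write, before it is opened again for reading:

import json

# step 1: write the data; the with block closes (and flushes) the file on exit
with open("test.txt", "w") as outfile:
    json.dump({"example": 1}, outfile)

# step 2: reopen the file and read the data back
with open("test.txt") as infile:
    data = json.load(infile)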
If you are a Windows user, the Tweepy API can generate an empty line between data objects. Because of this, you can get the "JSONDecodeError: Expecting value: line 1 column 1 (char 0)" error. To avoid it, you can delete the empty lines.
For example:
def on_data(self, data):
    try:
        with open('sentiment.json', 'a', newline='\n') as f:
            f.write(data)
            return True
    except BaseException as e:
        print("Error on_data: %s" % str(e))
        return True
Reference:
Twitter stream API gives JSONDecodeError("Expecting value", s, err.value) from None
If you use headers and have "Accept-Encoding": "gzip, deflate, br", install the brotli library with pip install brotli. You don't need to import brotli in your .py file.
In my case it was a simple solution of replacing single quotes with double quotes.
You can find my answer here

How to recover an Algorand Wallet with Python AlgoSDK?

Although the official algosdk (Python SDK for Algorand) documentation suggests that a wallet can be recovered by simply invoking the following function (link):
create_wallet(name, pswd, driver_name='sqlite', master_deriv_key=None)
with the fourth argument:
master_deriv_key (str, optional) – if recovering a wallet, include
wallet recovery does not work in my code and leads to an exception as well. Also, the official Algorand documentation shows how to use the above-mentioned function for recovering a wallet (link):
# recover the wallet by passing mdk when creating a wallet
new_wallet = kcl.create_wallet("MyTestWallet2", "testpassword", master_deriv_key=mdk)
Below is my code, a very simple snippet that I wrote to run some tests with the Algorand SDK:
from algosdk import kmd
from algosdk import mnemonic

kmd_clt = kmd.KMDClient('855d39510cce40caf11de4c941b37632d1529ec970156214528a33a0ae8473b4', 'http://127.0.0.1:6969')

if kmd_clt:
    kmd_wlt_mdk = None
    kmd_wlt_list = kmd_clt.list_wallets()
    for kmd_wlt in kmd_wlt_list:
        kmd_name = kmd_wlt['name']
        kmd_id = kmd_wlt['id']
        if kmd_name == 'wallet_name':
            kmd_wlt_hdl = kmd_clt.init_wallet_handle(kmd_id, 'wallet_password')
            if kmd_wlt_hdl:
                kmd_wlt_mdk = kmd_clt.export_master_derivation_key(kmd_wlt_hdl, 'wallet_password')
            break
    if kmd_wlt_mdk:
        kmd_wlt = kmd_clt.create_wallet('wallet_name', 'wallet_password', master_deriv_key=kmd_wlt_mdk)
        kmd_wlt_hdl = kmd_clt.init_wallet_handle(kmd_wlt['id'], 'wallet_password')
        acc_addr_list = kmd_clt.list_keys(kmd_wlt_hdl)
        for acc_addr in acc_addr_list:
            account_address = acc_addr
            print(account_address)
            account_key = kmd_clt.export_key(kmd_wlt_hdl, 'wallet_password', account_address)
            print(account_key)
            account_mnemonic = mnemonic.from_private_key(account_key)
            print(account_mnemonic)
Below are the traceback and the error message returned at run-time:
Traceback (most recent call last):
  File "/home/emiliano/anaconda3/lib/python3.7/site-packages/algosdk/kmd.py", line 63, in kmd_request
    resp = urlopen(req)
  File "/home/emiliano/anaconda3/lib/python3.7/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/home/emiliano/anaconda3/lib/python3.7/urllib/request.py", line 531, in open
    response = meth(req, response)
  File "/home/emiliano/anaconda3/lib/python3.7/urllib/request.py", line 641, in http_response
    'http', request, response, code, msg, hdrs)
  File "/home/emiliano/anaconda3/lib/python3.7/urllib/request.py", line 569, in error
    return self._call_chain(*args)
  File "/home/emiliano/anaconda3/lib/python3.7/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/home/emiliano/anaconda3/lib/python3.7/urllib/request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 400: Bad Request

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/emiliano/anaconda3/lib/python3.7/site-packages/algosdk/kmd.py", line 67, in kmd_request
    raise error.KMDHTTPError(json.loads(e)["message"])
algosdk.error.KMDHTTPError: wallet with same name already exists

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "algorand_test.py", line 49, in <module>
    kmd_wlt = kmd_clt.create_wallet('emiliano', 'emiliano', master_deriv_key=kmd_wlt_mdk)
  File "/home/emiliano/anaconda3/lib/python3.7/site-packages/algosdk/kmd.py", line 118, in create_wallet
    return self.kmd_request("POST", req, data=query)["wallet"]
  File "/home/emiliano/anaconda3/lib/python3.7/site-packages/algosdk/kmd.py", line 69, in kmd_request
    raise error.KMDHTTPError(e)
algosdk.error.KMDHTTPError: {
  "error": true,
  "message": "wallet with same name already exists"
}
It seems clear that the create_wallet function is the culprit behind this behavior, which leads to the error "wallet with same name already exists". The internals of the Algorand SDK are very simple; the APIs are wrappers around REST methods. The create_wallet function simply does (link):
def create_wallet(self, name, pswd, driver_name="sqlite",
                  master_deriv_key=None):
    """
    Create a new wallet.

    Args:
        name (str): wallet name
        pswd (str): wallet password
        driver_name (str, optional): name of the driver
        master_deriv_key (str, optional): if recovering a wallet, include

    Returns:
        dict: dictionary containing wallet information
    """
    req = "/wallet"
    query = {
        "wallet_driver_name": driver_name,
        "wallet_name": name,
        "wallet_password": pswd
    }
    if master_deriv_key:
        query["master_derivation_key"] = master_deriv_key
    return self.kmd_request("POST", req, data=query)["wallet"]
I'm sure the master derivation key passed in is correct, since I've already checked it with the goal command from the console.
Has anyone else experienced this type of problem before?
To summarize, the Algorand documentation of the REST APIs doesn't explicitly suggest using the Master-Derivation-Key to retrieve a Wallet when making a POST /v1/wallet (link). Conversely, the Algorand documentation of the Python SDK suggests that the Master-Derivation-Key can be passed to the create_wallet function, which then makes the HTTP POST stated before, to recover an existing Wallet (link).
As explained in my question above, create_wallet fails to recover the Wallet because the underlying POST /v1/wallet fails. At the suggestion of #Arty, this has been proven as follows:
curl -X POST -H "X-KMD-API-Token: <kmd-token>" -H "Content-Type: application/json" -d '{"wallet_driver_name": "sqlite", "wallet_name": <wallet-name>, "wallet_password": <wallet-password>, "master_derivation_key": <master-derivation-key>}' <kmd-address-and-port>/v1/wallet
which returned
{ "error": true, "message": "wallet with same name already exists" }
I reported this problem to Algorand support and I'm currently waiting for a reply. Anyhow, in order to give some sense to the question's title, I want to share another possible way to recover a Wallet, still relying on the Python SDK:
from algosdk import kmd
from algosdk import wallet
from algosdk import mnemonic

kmd_clt = kmd.KMDClient(<kmd-token>, <kmd-address-and-port>)

if kmd_clt:
    kmd_wlt_mdk = None
    kmd_wlt_list = kmd_clt.list_wallets()
    for kmd_wlt in kmd_wlt_list:
        kmd_name = kmd_wlt['name']
        kmd_id = kmd_wlt['id']
        if kmd_name == <wallet-name>:
            kmd_wlt_hdl = kmd_clt.init_wallet_handle(kmd_id, <wallet-password>)
            if kmd_wlt_hdl:
                kmd_wlt_mdk = kmd_clt.export_master_derivation_key(kmd_wlt_hdl, <wallet-password>)
            break
    if kmd_wlt_mdk:
        wlt = wallet.Wallet(<wallet-name>, <wallet-password>, kmd_clt, mdk=kmd_wlt_mdk)
        if wlt:
            acc_addr_list = wlt.list_keys()
            for acc_addr in acc_addr_list:
                account_address = acc_addr
                print(account_address)
                account_key = wlt.export_key(acc_addr)
                print(account_key)
                account_mnemonic = mnemonic.from_private_key(account_key)
                print(account_mnemonic)
I hope it will be useful to someone else in the future.

Python API troubles

I've been tasked with learning Python on the spot and feel like I'm drowning. I am trying to translate what was provided by a coworker but am really struggling. The API I am trying to work with is here: https://dev.skuvault.com/v1.0/reference#getonlinesalestatus
And the code I have is:
import requests, json
# Skuvault URIs and Token
SkuBase = "https://app.skuvault.com/api/sales/getOnlineSaleStatus"
SkuProductsUri = SkuBase + "Products(id)/Attributes('name')"
SkuAuthToken = ""
print "[+] Requesting: " + SkuProductsUri
response = requests.post(SkuProductsUri, headers={'Authorization': 'Bearer ' + SkuAuthToken})
productsJson = json.loads(response.status_code)
print "[*] Status: %d\n[*] Reason: %s\n[*] Message: %s\n[*] Raw: %s\n\n" \
% (response.status_code, response.reason, productsJson['Message'], response.text[:300])
I'm receiving the following error when trying to run the script
[+] Requesting: https://app.skuvault.com/api/sales/getOnlineSaleStatusProducts(id)/Attributes('name')
Traceback (most recent call last):
  File "test-api.py", line 11, in <module>
    productsJson = json.loads(response.status_code)
  File "/usr/local/Cellar/python@2/2.7.14_3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "/usr/local/Cellar/python@2/2.7.14_3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 364, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
TypeError: expected string or buffer
Like I said, I'd literally never worked with Python before this and feel a bit overwhelmed. Thanks.
You're not calling the API correctly. Notice that the API expects you to POST a request, and it expects the request payload to look like
{
    "OrderIds": [
        "my-id"
    ],
    "TenantToken": "my-tenant-token",
    "UserToken": "my-user-token"
}
If I were doing similar, I might do:
sku_base = "https://app.skuvault.com/api/sales/getOnlineSaleStatus"

response = requests.post(
    sku_base,
    json={
        "OrderIds": ["my-order-id"],
        "TenantToken": "my-tenant-token",
        "UserToken": "my-user-token"
    }
)

# NOTE: you are not using the status code here. That's an int and will error.
productsJson = json.loads(response.text)

# This is also valid and will result in the same object:
productsJson = response.json()

print("[*] Status: %d\n[*] Reason: %s\n[*] Message: %s\n[*] Raw: %s\n\n"
      % (response.status_code, response.reason, productsJson['Message'], response.text[:300]))
Side note:
Python 2 is no longer maintained. It is recommended that you use Python 3 exclusively.

http.client.BadStatusLine: '' when attempting app-only twitter oauth with python3

I am trying to create a script that will process Twitter streams. Unfortunately, the OAuth process has stymied me. Adapting some code I found on the internet, I receive a blank response from https://api.twitter.com/oauth/token. In order to better understand the process, I am trying to do this without special modules. Here is my code; what am I missing? Any help would be greatly appreciated.
Code:
import http.client
import urllib
import base64
CONSUMER_KEY = 'yadayadayada'
CONSUMER_SECRET = 'I am really tired today'
encoded_CONSUMER_KEY = urllib.parse.quote(CONSUMER_KEY)
encoded_CONSUMER_SECRET = urllib.parse.quote(CONSUMER_SECRET)
concat_consumer_url = encoded_CONSUMER_KEY + ':' + encoded_CONSUMER_SECRET
host = 'api.twitter.com'
url = '/oauth2/token/'
params = urllib.parse.urlencode({'grant_type' : 'client_credentials'})
req = http.client.HTTPSConnection(host, timeout = 100)
req.set_debuglevel(1)
req.putrequest("POST", url)
req.putheader("Host", host)
req.putheader("User-Agent", "My Twitter 1.1")
req.putheader("Authorization", "Basic %s" % base64.b64encode(b'concat_consumer_url'))
req.putheader("Content-Type", "application/x-www-form-urlencoded;charset=UTF-8")
req.putheader("Content-Length", "29")
req.putheader("Accept-Encoding", "identity")
req.endheaders()
req.send(b'params')
resp = req.getresponse()
print ("{} {}".format(resp.status, resp.reason))
Error message:
C:\Python33>app_only_test_klug.py
Traceback (most recent call last):
  File "C:\Python33\app_only_test_klug.py", line 31, in <module>
    resp = req.getresponse()
  File "C:\Python33\lib\http\client.py", line 1131, in getresponse
    response.begin()
  File "C:\Python33\lib\http\client.py", line 354, in begin
    version, status, reason = self._read_status()
  File "C:\Python33\lib\http\client.py", line 324, in _read_status
    raise BadStatusLine(line)
http.client.BadStatusLine: ''
Any help would be greatly appreciated.
UPDATE:
After some more tinkering, I believe that the issue lies with my base64 encoding:
req.putheader("Authorization", "Basic %s" % base64.b64encode(b'concat_consumer_url'))
When I decode the resulting encoding of the above, I get "b'concat_consumer_url'" rather than a concatenation of the encoded_CONSUMER_KEY and encoded_CONSUMER_SECRET joined by a colon. How do I get base64.b64encode to encode the value that concat_consumer_url represents rather than the literal string "concat_consumer_url", so that I can move forward? Thanks in advance.
I believe the issue is there as well: you should just pass the variable to the encoding function, rather than the name of the variable as a bytes literal, like this:
req.putheader("Authorization", "Basic %s" % base64.b64encode(concat_consumer_url))
Try it again with that change.
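Note that on Python 3, base64.b64encode() itself expects bytes and returns bytes, while the header value should end up as str, so a version of that line that actually runs under Python 3 might look like this (a sketch reusing the question's variable names):

# encode the str credentials to bytes, base64-encode them, then decode the
# result back to str so it can be interpolated into the header value
auth = base64.b64encode(concat_consumer_url.encode('ascii')).decode('ascii')
req.putheader("Authorization", "Basic %s" % auth)

The same variable-vs-literal mix-up also affects the body: req.send(b'params') sends the literal bytes b'params', whereas the intent is presumably req.send(params.encode('ascii')).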

How to query a restful webservice using Python

I am writing a Python script that uses the Requests lib to fire off a request to a remote web service. Here is my code (test.py):
import logging.config
from requests import Request, Session

logging.config.fileConfig('../../resources/logging.conf')
logr = logging.getLogger('pyLog')

url = 'https://158.74.36.11:7443/hqu/hqapi1/user/get.hqu'
token01 = 'hqstatus_python'
token02 = 'ytJFRyV7g'
response_length = 351

def main():
    try:
        logr.info('start SO example')
        s = Session()
        prepped = Request('GET', url, auth=(token01, token02), params={'name': token01}).prepare()
        response = s.send(prepped, stream=True, verify=False)
        logr.info('status: ' + str(response.status_code))
        logr.info('elapsed: ' + str(response.elapsed))
        logr.info('headers: ' + str(response.headers))
        logr.info('content: ' + response.raw.read(response_length).decode())
    except Exception:
        logr.exception("Exception")
    finally:
        logr.info('stop')

if __name__ == '__main__':
    main()
I get the following successful output when I run this:
INFO test - start SO example
INFO test - status: 200
INFO test - elapsed: 0:00:00.532053
INFO test - headers: CaseInsensitiveDict({'server': 'Apache-Coyote/1.1', 'set-cookie': 'JSESSIONID=8F87A69FB2B92F3ADB7F8A73E587A10C; Path=/; Secure; HttpOnly', 'content-type': 'text/xml;charset=UTF-8', 'transfer-encoding': 'chunked', 'date': 'Wed, 18 Sep 2013 06:34:28 GMT'})
INFO test - content: <?xml version="1.0" encoding="utf-8"?>
<UserResponse><Status>Success</Status> .... </UserResponse>
INFO test - stop
As you can see, there is this weird variable 'response_length' that I need to pass to the response object (optional argument) to be able to read the content. This variable has to be set to a numeric value equal to the length of the content, which obviously means I need to know the response content length beforehand; that is unreasonable.
If I don't pass that variable, or set it to a value greater than the content length, I get the following error:
Traceback (most recent call last):
  File "\Python33\lib\http\client.py", line 590, in _readall_chunked
    chunk_left = self._read_next_chunk_size()
  File "\Python33\lib\http\client.py", line 562, in _read_next_chunk_size
    return int(line, 16)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb4 in position 0: invalid start byte

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test.py", line 22, in main
    logr.info('content: ' + response.raw.read().decode())
  File "\Python33\lib\site-packages\requests\packages\urllib3\response.py", line 167, in read
    data = self._fp.read()
  File "\Python33\lib\http\client.py", line 509, in read
    return self._readall_chunked()
  File "\Python33\lib\http\client.py", line 594, in _readall_chunked
    raise IncompleteRead(b''.join(value))
http.client.IncompleteRead: IncompleteRead(351 bytes read)
How do I make this work without this 'response_length' variable?
Also, are there any better options than 'Requests' lib?
PS: this code is an independent script, and does not run in the Django framework.
Use the public API instead of internals and leave worrying about content length and reading to the library:
import requests
s = requests.Session()
s.verify = False
s.auth = (token01, token02)
resp = s.get(url, params={'name': token01}, stream=True)
content = resp.content
or, since stream=True was set, you can iterate over the response data instead:
for line in resp.iter_lines():
    # process a line
or
for chunk in resp.iter_content():
    # process a chunk
If you must have a file-like object, then resp.raw can be used (provided stream=True is set on the request, like done above), but then just use .read() calls without a length to read to EOF.
If you are however, not querying a resource that requires you to stream (anything but a large file request, a requirement to test headers first, or a web service that is explicitly documented as a streaming service), just leave off the stream=True and use resp.content or resp.text for byte or unicode response data.
In the end, however, it appears your server is sending chunked responses that are malformed or incomplete; a chunked transfer encoding includes length information for each chunk and the server appears to be lying about a chunk length or sending too little data for a given chunk. The decode error is merely the result of incomplete data having been sent.
The server you request uses "chunked" transfer encoding, so there is no content-length header. A raw response in chunked transfer encoding contains not only the actual content but also the chunk markers; a chunk marker is a length in hex followed by "\r\n", and it will always cause an XML or JSON parser error.
Try using:
response.raw.read(decode_content=True)
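For example, with the streaming session from the answer above (a sketch; decode_content=True tells urllib3 to decode the body according to its Content-Encoding, such as gzip or deflate, while reading):

resp = s.get(url, params={'name': token01}, stream=True)
raw_body = resp.raw.read(decode_content=True)  # bytes, decoded per the response's Content-Encoding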
