Python & JSON - eBay API image upload error

I've been trying to upload a PNG image to the eBay API with the return_file_upload call:
http://developer.ebay.com/Devzone/post-order/post-order_v2_return-returnId_file_upload__post.html#Samples
It's odd because the documentation says the data parameter accepts an array, but the samples don't use arrays. When I tried using an array I got:
Can not deserialize instance of byte out of VALUE_STRING at [Source: java.io.SequenceInputStream#4d57f134; line: 1, column: 11] (through reference chain: com.ebay.marketplace.returns.v3.services.request.UploadFileRequest["data"])
This is my code:
import json
import base64
import requests

with open("take_full_login.png", "rb") as image_file:
    encoded_string = base64.encodestring(image_file.read())

url2 = 'https://api.ebay.com/post-order/v2/return/123456/file/upload'
payload2 = {
    "data": encoded_string,
    "filePurpose": "LABEL_RELATED"
}
requests.post(url=url2, data=json.dumps(payload2), headers=headers)
That currently outputs
{"error":[{"errorId":1616,"domain":"returnErrorDomain","severity":"ERROR","category":"REQUEST","message":"Invalid Input.","parameter":[{"value":"data","name":"parameter"}],"longMessage":"Invalid Input.","httpStatusCode":400}]}

Try replacing data=json.dumps(payload2) with json=payload2.
The call /post-order/v2/cancellation/check_eligibility only worked that way for me.
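A minimal sketch of that change, assuming Python 3, where base64.b64encode returns bytes that must be decoded to str before JSON serialization (the helper name is hypothetical, and the real call still needs the auth headers from the question):

```python
import base64
import json


def build_upload_payload(path):
    """Build the JSON body for the return_file_upload call (sketch)."""
    with open(path, "rb") as image_file:
        # In Python 3, b64encode returns bytes; decode so the value
        # serializes as a JSON string instead of raising TypeError
        encoded = base64.b64encode(image_file.read()).decode("ascii")
    return {"data": encoded, "filePurpose": "LABEL_RELATED"}


# Sending with json= lets requests serialize the body and set
# Content-Type: application/json (url2 and headers as in the question):
# response = requests.post(url2,
#                          json=build_upload_payload("take_full_login.png"),
#                          headers=headers)
```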


JSONDecodeError: Expecting value: line 1 column 1 (char 0), even though the json parameter is included

I'm trying to retrieve data from https://clinicaltrials.gov/ and although I've specified the format as JSON in the request parameter:
fmt=json
the returned value is plain text by default.
As a consequence, I'm not able to retrieve the response with json().
Good:
import requests
response = requests.get('https://clinicaltrials.gov/api/query/study_fields?expr=heart+attack&fields=NCTId%2CBriefTitle%2CCondition&min_rnk=1&max_rnk=&fmt=json')
response.text
Not Good:
import requests
response = requests.get('https://clinicaltrials.gov/api/query/study_fields?expr=heart+attack&fields=NCTId%2CBriefTitle%2CCondition&min_rnk=1&max_rnk=&fmt=json')
response.json()
Any idea how to turn this text into JSON?
I've tried response.text, which works, but I want to retrieve the data with json().
You can use the following code snippet:
import requests, json
response = requests.get('https://clinicaltrials.gov/api/query/study_fields?expr=heart+attack&fields=NCTId%2CBriefTitle%2CCondition&min_rnk=1&max_rnk=&fmt=json')
jsonResponse = json.loads(response.content)
You should use the json package (built into Python, so you don't need to install anything), which converts the text into a Python object (a dictionary) using the json.loads() function.
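For illustration, json.loads works on both the decoded text and the raw bytes of a response (since Python 3.6), so either response.text or response.content can be parsed; a small self-contained sketch with a simulated response body:

```python
import json

# Simulated raw response body, as requests exposes it via response.content
raw_bytes = b'{"StudyFieldsResponse": {"NStudiesFound": 3000}}'
text = raw_bytes.decode("utf-8")           # what response.text would return

parsed_from_text = json.loads(text)        # parse the str
parsed_from_bytes = json.loads(raw_bytes)  # json.loads also accepts bytes (3.6+)

assert parsed_from_text == parsed_from_bytes
print(parsed_from_text["StudyFieldsResponse"]["NStudiesFound"])  # → 3000
```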

Finding specific string and printing from collected data of a site using python

Here is what I am trying to do: I have data from the ngrok tunnel API (http://127.0.0.1:4040/api/tunnels) and I want to print only 'public_url': 'https://.....ngrok.io' from what I collected from that site. The data looks like this:
{'tunnels': [{'name': 'command_line', 'uri': '/api/tunnels/command_line', 'public_url': 'https://a28e4c77.ngrok.io', 'proto': 'https', 'config': {'addr': 'http://localhost:80', 'inspect': True}....Something more
This is part of that data.
I used this code to collect it:
import requests
url = "http://127.0.0.1:4040/api/tunnels"
r = requests.get(url)
data = r.json()
I have also saved this into ngrok.txt, but I have absolutely no idea how to find it there. To write this data I used this code:
import requests
url = "http://127.0.0.1:4040/api/tunnels"
r = requests.get(url)
data = r.json()
f = open('ngrok.txt', 'w')
f.write(data)
f.close()
You need to convert your JSON string to a JSON object. You can do that with the loads() function from the json library.
Here is the code for your example:
import json
json.loads(data)["tunnels"][0]["public_url"]
json.loads(data) converts a string to a JSON object.
["tunnels"] gets the object associated with the key "tunnels".
The resulting object is a list, so you get the first element with [0].
Finally you get "public_url".
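One caveat worth noting: in the question's code, data = r.json() is already a Python dict, so it can be indexed directly; json.loads/json.load is what you need when reading the text back from a file, and json.dump is what makes writing it to a file work (f.write(data) on a dict raises TypeError). A self-contained sketch with a simulated ngrok payload and an in-memory file:

```python
import io
import json

# Simulated ngrok /api/tunnels response, as r.json() would return it (a dict)
data = {
    "tunnels": [
        {
            "name": "command_line",
            "uri": "/api/tunnels/command_line",
            "public_url": "https://a28e4c77.ngrok.io",
            "proto": "https",
        }
    ]
}

# r.json() already gives a dict: index it directly, no json.loads needed
print(data["tunnels"][0]["public_url"])  # → https://a28e4c77.ngrok.io

# To save it, serialize with json.dump (StringIO stands in for ngrok.txt here)
buf = io.StringIO()
json.dump(data, buf)

# ... and json.load is the matching call when reading it back
buf.seek(0)
restored = json.load(buf)
assert restored["tunnels"][0]["public_url"] == "https://a28e4c77.ngrok.io"
```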

Parse POSTed Excel file in python

Sorry, I am a noob when it comes to web. I am trying to send an Excel file through API Gateway and process it with a Python lambda that writes to S3. I am sending the file as "application/octet-stream" and parsing it after I get the event object, as follows:
import io
import cgi
import pandas as pd
import xlrd

def read_file(event):
    c_type, c_data = cgi.parse_header(event['headers']['Content-Type'])
    encoded_file = event['body'].encode('utf-8')
    c_data['boundary'] = bytes(c_data['boundary'], "utf-8")
    parsed_body = cgi.parse_multipart(io.BytesIO(encoded_file), c_data)
    return parsed_body
this essentially should give me an io.BytesIO stream which I should be able to read as
df = pd.ExcelFile(list(parsed_body.values())[0][0], engine='xlrd')
the function read_file() will be called by the lambda_handler as

def lambda_handler(event, context):
    parsed_body = read_file(event)
    df = pd.ExcelFile(list(parsed_body.values())[0][0], engine='xlrd')
    # Some post processing of the df
I am failing at the point where pandas cannot read this parsed_body. I also tried the multipart library, but that did not give me a result either.
If anyone can show me a method to parse the event body, I would be grateful.
The error that I get is:
File "<ipython-input-264-dfd56a631cc4>", line 1, in <module>
  cgi.parse_multipart(event_bytes, c_data)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/cgi.py", line 261, in parse_multipart
  line = fp.readline()
AttributeError: 'bytes' object has no attribute 'readline'
I finally found an answer: use base64 encoding from cURL and pass the data to the API like this
curl -H 'Content-Type:application/octet-stream' --data-binary '{"file": "'"$(base64 /Path/to/file)"'"}' 'https://someAPI.com/some/path?param1=value1\&param2=value2'
With this, the API Gateway receives a JSON body with the structure {"file": "Base64 encoded string here"}.
Once you have this body, first get the base64-decoded file bytes as

eventBody = base64.b64decode(json.loads(event['body'])['file'])

Now create an empty stream, write the decoded bytes into it, and set the seek position back to 0:

toread = io.BytesIO()
toread.write(eventBody)
toread.seek(0)

Finally just pass this stream to pandas:

df = pd.read_excel(toread, sheet_name=sn)

And it worked.
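The answer's steps can be sketched end to end as a minimal handler; the event below is simulated, and the final pd.read_excel call is left as a comment since it needs a real workbook:

```python
import base64
import io
import json


def lambda_handler(event, context):
    # The body is a JSON document: {"file": "<base64 encoded bytes>"}
    event_body = base64.b64decode(json.loads(event["body"])["file"])

    # Wrap the raw bytes in an in-memory stream; BytesIO(data) is
    # equivalent to write()-then-seek(0) and starts at position 0
    toread = io.BytesIO(event_body)

    # With real Excel bytes this stream can go straight to pandas:
    # df = pd.read_excel(toread, sheet_name=0)
    return toread.read()


# Simulated API Gateway event carrying the bytes b"fake xlsx bytes"
event = {
    "body": json.dumps(
        {"file": base64.b64encode(b"fake xlsx bytes").decode("ascii")}
    )
}
print(lambda_handler(event, None))  # → b'fake xlsx bytes'
```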

python unable to parse JSON Data

I am unable to parse JSON data using Python.
A webpage URL is returning JSON data:
import requests
import json

BASE_URL = "https://www.codechef.com/api/ratings/all"
page = 1  # page number to fetch
data = {'page': page, 'sortBy': 'global_rank', 'order': 'asc', 'itemsPerPage': '40'}
r = requests.get(BASE_URL, data=data)
receivedData = r.text
print(receivedData)
When I printed this, I got a large text, and when I validated it using https://jsonlint.com/ it showed VALID JSON.
Later I used
import requests
import json

BASE_URL = "https://www.codechef.com/api/ratings/all"
page = 1  # page number to fetch
data = {'page': page, 'sortBy': 'global_rank', 'order': 'asc', 'itemsPerPage': '40'}
r = requests.get(BASE_URL, data=data)
receivedData = r.text
print(json.loads(receivedData))
When I validated the large printed text using https://jsonlint.com/ it showed INVALID JSON.
The same thing happens even if I don't print and use the data directly, so I am sure it is not loading correctly internally either.
Is Python unable to parse the text to JSON properly?
In short, json.loads converts from JSON (an object, an array, whatever) into a Python object - in this case, a dictionary. When you print that, it prints as Python's repr and therefore uses single quotes.
Effectively your code can be expanded to:
some_dictionary = json.loads(a_string_which_is_a_json_object)
print(some_dictionary)
To make sure that you're printing JSON-safe text, you need to re-encode with json.dumps.
When you use Python's json.loads(text) it returns a Python dictionary. When you print that dictionary, the output is not in JSON format.
If you want JSON output, you should use json.dumps(json_object).
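A tiny self-contained illustration of the point both answers make: printing the parsed dict shows Python's repr (single quotes, not valid JSON), while json.dumps produces valid JSON again:

```python
import json

received = '{"global_rank": 1, "username": "tourist"}'

parsed = json.loads(received)   # a Python dict, not JSON text
print(parsed)                   # → {'global_rank': 1, 'username': 'tourist'}

reencoded = json.dumps(parsed)  # back to valid JSON text
print(reencoded)                # → {"global_rank": 1, "username": "tourist"}

# The repr is not valid JSON, but the dumps output round-trips cleanly
assert json.loads(reencoded) == parsed
```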

Python3 error: initial_value must be str or None, with StringIO

While porting code from Python 2 to 3, I get this error when reading from a URL:
TypeError: initial_value must be str or None, not bytes.
import urllib
import json
import gzip
from io import StringIO
from urllib.parse import urlencode
from urllib.request import Request

service_url = 'https://babelfy.io/v1/disambiguate'
text = 'BabelNet is both a multilingual encyclopedic dictionary and a semantic network'
lang = 'EN'
Key = 'KEY'
params = {
    'text': text,
    'key': Key,
    'lang': 'EN'
}
url = service_url + '?' + urlencode(params)
request = Request(url)
request.add_header('Accept-encoding', 'gzip')
response = urllib.request.urlopen(request)
if response.info().get('Content-Encoding') == 'gzip':
    buf = StringIO(response.read())
    f = gzip.GzipFile(fileobj=buf)
    data = json.loads(f.read())
The exception is thrown at this line
buf = StringIO(response.read())
If I use Python 2, it works fine.
response.read() returns an instance of bytes, while StringIO is an in-memory stream for text only. Use BytesIO instead.
From What's new in Python 3.0 - Text Vs. Data Instead Of Unicode Vs. 8-bit
The StringIO and cStringIO modules are gone. Instead, import the io module and use io.StringIO or io.BytesIO for text and data respectively.
This looks like another Python 3 bytes vs. str problem. Your response is of type bytes (which in Python 3 is distinct from str). You need to get it into a string first, using, say, response.read().decode('utf-8'), and then use StringIO on it. Or you may want to use BytesIO as someone said - but if you expect str, the preferred way is to decode into a str first.
Consider using six.StringIO instead of io.StringIO.
And if you are migrating code from Python 2 to Python 3 and using an old version of suds, use "suds-py3" for Python 3.
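A self-contained sketch of the BytesIO fix, with the gzipped HTTP body simulated so the snippet runs without the network:

```python
import gzip
import io
import json

# Simulate what response.read() would return for a gzipped JSON body
body = gzip.compress(json.dumps({"lang": "EN", "tokens": 14}).encode("utf-8"))

# BytesIO, not StringIO: response.read() yields bytes in Python 3
buf = io.BytesIO(body)
f = gzip.GzipFile(fileobj=buf)
data = json.loads(f.read())   # json.loads accepts bytes since Python 3.6

print(data)  # → {'lang': 'EN', 'tokens': 14}
```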
