I'm trying to use the FindingAPI, but I'm getting this error with PROD credentials. I think the problem is within the API.
My code is just this simple call:
try:
    api = Connection(config_file='ebay.dev.yml', domain="api.ebay.com", debug=True, siteid='EBAY-US', escape_xml=False)
    request = {
        'keywords': "go pro 8",
        'itemFilter': [
            {'name': 'Condition', 'value': 'used'},
            {'name': 'SoldItemsOnly', 'value': 'true'}
        ],
        'paginationInput': {
            'entriesPerPage': 1,
            'pageNumber': 1
        },
        'sortOrder': 'PricePlusShippingLowest'
    }
    response = api.execute('findCompletedItems', request)
    print(response)
except ConnectionError as e:
    print(e)
    print(e.response.dict())
This gives me this error:
2020-11-13 06:02:36,022 ebaysdk [DEBUG]:status code=202
2020-11-13 06:02:36,022 ebaysdk [DEBUG]:headers={'Date': 'Thu, 12 Nov 2020 22:02:36 GMT', 'Server': 'Synapse-HttpComponents-NIO', 'Transfer-Encoding': 'chunked', 'Strict-Transport-Security': 'max-age=31536000'}
2020-11-13 06:02:36,023 ebaysdk [DEBUG]:content=
2020-11-13 06:02:36,023 ebaysdk [DEBUG]:response parse failed: Document is empty, line 1, column 1 (<string>, line 1)
2020-11-13 06:02:36,024 ebaysdk [ERROR]:findCompletedItems: Accepted
'findCompletedItems: Accepted'
{'findCompletedItemsResponse': 'parse error Document is empty, line 1, column 1 (<string>, line 1)'}
Does anyone have an idea how to make this succeed? What I tried was adjusting the ebay yml, which seems to work fine with TradingAPI calls but not with this FindingAPI for some reason. I already checked the GitHub repo for related issues and couldn't find one.
You need to change your import.
Your import is probably "from ebaysdk.trading import Connection"; you need to change it to
"from ebaysdk.finding import Connection".
I am using the Python API to save and download a model from MinIO, which is installed on my own server. The data is in binary format.
import io
import pickle

a = 'Hello world!'
a = pickle.dumps(a)
# client is an already-configured minio.Minio instance
client.put_object(
    bucket_name='my_bucket',
    object_name='my_object',
    data=io.BytesIO(a),
    length=len(a)
)
I can see the object saved through the command line:
mc cat origin/my_bucket/my_object
Hello world!
However, when I try to get it through the Python API:
response = client.get_object(
    bucket_name='my_bucket',
    object_name='my_object'
)
response is a urllib3.response.HTTPResponse object here.
I am trying to read it as:
response.read()
b''
I get an empty byte string. How can I read this object? It won't be possible for me to know its length at the time of reading it.
And here is response.__dict__:
{'headers': HTTPHeaderDict({'Accept-Ranges': 'bytes', 'Content-Length': '27', 'Content-Security-Policy': 'block-all-mixed-content', 'Content-Type': 'application/octet-stream', 'ETag': '"75687-1"', 'Last-Modified': 'Fri, 16 Jul 2021 14:47:35 GMT', 'Server': 'MinIO/DEENT.T', 'Vary': 'Origin', 'X-Amz-Request-Id': '16924CCA35CD', 'X-Xss-Protection': '1; mode=block', 'Date': 'Fri, 16 Jul 2021 14:47:36 GMT'}), 'status': 200, 'version': 11, 'reason': 'OK', 'strict': 0, 'decode_content': True, 'retries': Retry(total=5, connect=None, read=None, redirect=None, status=None), 'enforce_content_length': False, 'auto_close': True, '_decoder': None, '_body': None, '_fp': <http.client.HTTPResponse object at 01e50>, '_original_response': <http.client.HTTPResponse object at 0x7e50>, '_fp_bytes_read': 0, 'msg': None, '_request_url': None, '_pool': <urllib3.connectionpool.HTTPConnectionPool object at 0x790>, '_connection': None, 'chunked': False, 'chunk_left': None, 'length_remaining': 27}
Try with response.data.decode()
The response is a urllib3.response.HTTPResponse object.
See urllib3 Documentation:
Backwards-compatible with http.client.HTTPResponse but the response body is loaded and decoded on-demand when the data property is accessed.
Specifically, you should read the body like this:
response.data  # len(response.data) gives the content length
Or, if you want to stream the object, there are examples in the minio-py repository: examples/get_objects.
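Since the object in the question was written with pickle, here is a minimal sketch of reading it back (client is the same configured minio.Minio instance; the close/release_conn cleanup follows minio-py's examples):

import pickle

response = client.get_object(bucket_name='my_bucket', object_name='my_object')
try:
    data = response.data        # body is loaded on demand as bytes
    obj = pickle.loads(data)    # 'Hello world!'
finally:
    response.close()            # release the underlying HTTP connection
    response.release_conn()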
I have a rather basic bit of code. It sends an API request to a locally hosted server and returns a JSON string. I take that string, crack it apart, pull out what I need, build a dictionary, and export it as an XML file with an .nfo extension.
The issue is that sometimes bits of the source data are missing; season is missing fairly frequently, for example, and that breaks the data mapping. I need a way to handle that: for some things I may want to exclude the data, and for others I need a sane default value.
#!/bin/env python
import os
import requests
import re
import json
import dicttoxml
import xml.dom.minidom
from xml.dom.minidom import parseString

# Grab Shoko Auth Key
apiheaders = {
    'Content-Type': 'application/json',
    'Accept': 'application/json',
}
apidata = '{"user": "Default", "pass": "", "device": "CLI"}'
r = requests.post('http://192.168.254.100:8111/api/auth',
                  headers=apiheaders, data=apidata)
key = json.loads(r.text)['apikey']

# Grabbing Episode Data
EpisodeHeaders = {
    'accept': 'text/plain',
    'apikey': key
}
EpisodeParams = (
    ('filename', "FILE HERE"),
    ('pic', '1'),
)
fileinfo = requests.get(
    'http://192.168.254.100:8111/api/ep/getbyfilename',
    headers=EpisodeHeaders, params=EpisodeParams)

# Mapping Data from Shoko to Jellyfin NFO
string = json.loads(fileinfo.text)
print(string)
eplot = json.loads(fileinfo.text)['summary']
etitle = json.loads(fileinfo.text)['name']
eyear = json.loads(fileinfo.text)['year']
episode = json.loads(fileinfo.text)['epnumber']
season = json.loads(fileinfo.text)['season']
aid = json.loads(fileinfo.text)['aid']
seasonnum = season.split('x')

# Create Dictionary From Mapped Data
show = {
    "plot": eplot,
    "title": etitle,
    "year": eyear,
    "episode": episode,
    "season": seasonnum[0],
}
Here is some example output when the code crashes
{'type': 'ep', 'eptype': 'Credits', 'epnumber': 1, 'aid': 10713, 'eid': 167848,
'id': 95272, 'name': 'Opening', 'summary': 'Episode Overview not Available',
'year': '2014', 'air': '2014-11-23', 'rating': '10.00', 'votes': '1',
'art': {'fanart': [{'url': '/api/v2/image/support/plex_404.png'}],
'thumb': [{'url': '/api/v2/image/support/plex_404.png'}]}}
Traceback (most recent call last):
File "/home/fletcher/Documents/Shoko-Jellyfin-NFO/Xml3.py", line 48, in <module>
season = json.loads(fileinfo.text)['season']
KeyError: 'season'
The solution, based on what Mahori suggested, worked perfectly:
eplot = json.loads(fileinfo.text).get('summary', None)
etitle = json.loads(fileinfo.text).get('name', None)
eyear = json.loads(fileinfo.text).get('year', None)
episode = json.loads(fileinfo.text).get('epnumber', None)
season = json.loads(fileinfo.text).get('season', '1x1')
aid = json.loads(fileinfo.text).get('aid', None)
This is a fairly common scenario in web development, where you cannot always assume the other party will send all keys.
The standard way to get around this is to use get instead of indexing by key:
season = json.loads(fileinfo.text).get('season', None)
# you can change None to any default value here
I'm getting this message when trying to test my Python 3.8 Lambda function:
Logs are:
soc-connect
contacts.csv
{'ResponseMetadata': {'RequestId': '9D7D7F0C5CB79984', 'HostId': 'wOd6HvIm+BpLOMKF2beRvqLiW0NQt5mK/kzjCjYxQ2kHQZY0MRCtGs3l/rqo4o0r4xAPuV1QpGM=', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amz-id-2': 'wOd6HvIm+BpLOMKF2beRvqLiW0NQt5mK/kzjCjYxQ2kHQZY0MRCtGs3l/rqo4o0r4xAPuV1QpGM=', 'x-amz-request-id': '9D7D7F0C5CB79984', 'date': 'Thu, 26 Mar 2020 11:21:35 GMT', 'last-modified': 'Tue, 24 Mar 2020 16:07:30 GMT', 'etag': '"8a3785e750475af3ca25fa7eab159dab"', 'accept-ranges': 'bytes', 'content-type': 'text/csv', 'content-length': '52522', 'server': 'AmazonS3'}, 'RetryAttempts': 0}, 'AcceptRanges': 'bytes', 'LastModified': datetime.datetime(2020, 3, 24, 16, 7, 30, tzinfo=tzutc()), 'ContentLength': 52522, 'ETag': '"8a3785e750475af3ca25fa7eab159dab"', 'ContentType': 'text/csv', 'Metadata': {}, 'Body': <botocore.response.StreamingBody object at 0x7f858dc1e6d0>}
1153
<_csv.reader object at 0x7f858ea76970>
[ERROR] Error: iterator should return strings, not bytes (did you open the file in text mode?)
The code snippet is:
import boto3
import csv

def digest_csv(bucket_name, key_name):
    # Let's use Amazon S3
    s3 = boto3.client('s3')
    print(bucket_name)
    print(key_name)
    s3_object = s3.get_object(Bucket=bucket_name, Key=key_name)
    print(s3_object)

    # read the contents of the file and split it into a list of lines
    lines = s3_object['Body'].read().splitlines(True)
    print(len(lines))
    contacts = csv.reader(lines, delimiter=';')
    print(contacts)

    # now iterate over those contacts
    for contact in contacts:
        # each contact is a list of fields
        # do whatever you want with each line here
        print('-*-'.join(contact))
I think the problem is in csv.reader.
I'm passing it a list of lines as the first parameter... should that be modified?
Any ideas?
Instead of using csv.reader, the following worked for me (adjusted for your delimiter and variables):
for line in lines:
    contact = ''.join(line.decode().split(';'))
    print(contact)
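Alternatively, here is a sketch that keeps csv.reader: the error says the iterator yields bytes, so decoding the S3 body to text first (UTF-8 is an assumption here) also works:

# Decode once so csv.reader receives strings rather than bytes.
body = s3_object['Body'].read().decode('utf-8')
contacts = csv.reader(body.splitlines(True), delimiter=';')
for contact in contacts:
    print('-*-'.join(contact))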
I am trying to update a value in the REST API of openHAB using requests.put in Python, but I am getting a 404 error.
My code is copied below:
import requests
import json
from pprint import pprint

TemperatureA_FF_Office = 20
headers = {'Content-type': 'application/json'}
payload = {'state': TemperatureA_FF_Office}
payld = json.dumps(payload)
re = requests.put("http://localhost:8080/rest/items/TemperatureA_FF_Office/state/put",
                  params=payld, headers=headers)
pprint(vars(re))
The error response I am getting is:
{'_content': '',
'_content_consumed': True,
'connection': <requests.adapters.HTTPAdapter object at 7fd3b55ec9d0>,
'cookies': <<class 'requests.cookies.RequestsCookieJar'>[]>,
'elapsed': datetime.timedelta(0, 0, 4019),
'encoding': None,
'history': [],
'raw': <urllib3.response.HTTPResponse object at 0x7fd3b55ecd90>,
'reason': 'Not Found',
'request': <PreparedRequest [PUT]>,
'status_code': 404,
'url': u'http://localhost:8080/rest/items/TemperatureA_FF_Office/state/put?state=21.0'}
Is this the way to use requests.put? Please help.
Try something along these lines:
import requests

req = "http://localhost:8080/rest/items/YOUR_SENSOR_HERE/state"
val = VARIABLE_WITH_YOUR_SENSOR_DATA
try:
    r = requests.put(req, data=val)
except requests.ConnectionError as e:
    r = "Response Error"
    print(e)
print(r)
This is a massively simplified version of what I'm using for some of my presence detection and temperature scripts.
The printing of 'r' and 'e' is useful for debug purposes and can be removed once you've got your script working properly.
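Applied to the question's item, a minimal sketch (assuming a stock openHAB install: the endpoint is /state with no trailing /put, and the state goes in the body as plain text rather than JSON):

import requests

item = "TemperatureA_FF_Office"
r = requests.put(
    "http://localhost:8080/rest/items/%s/state" % item,
    data=str(20),
    headers={"Content-Type": "text/plain"},
)
print(r.status_code)  # typically 202 Accepted when the update is taken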
I am trying to make a call to the SugarCRM v10 API to get the output of a report without having to log into the web interface and click the export button. I would like to get this report as data that can be written into CSV format using Python and the requests library.
I can authenticate successfully and get a token, but whatever I try, all I get as a response from Reports is "Error: Method does not exist", by which they mean that you cannot use /csv at the end of the second URL in this code block.
url = "https://mydomain.sugarondemand.com/rest/v10/oauth2/token"
payload = {"grant_type":"password","username":"ursername","password":"password","client_id":"sugar", "platform":"myspecialapp"}
r = requests.post(url, data=json.dumps(payload))
response = json.loads(r.text)
token = response[u'access_token']
print 'Success! OAuth token is ' + token
#What export methods are available? ###################################
#WRONG url = "https://mydomain.sugarondemand.com/rest/v10/Reports/report_id/csv"
#Following paquino's suggestion I used Base64
url = "https://mydomain.sugarondemand.com/rest/v10/Reports/report_id/Base64"
headers = { "Content-Type" : "application/json", "OAuth-Token": token }
r = requests.get(url, headers=headers);
response = r.text.decode('base64')
print response`
My question is this: what export methods are available via an API call to v10 of the SugarCRM API?
Edit: Using Base64 in the request URL unfortunately returns an object that I don't know how to parse...
%PDF-1.7
3 0 obj
<</Type /Page
/Parent 1 0 R
/MediaBox [0 0 792.00 612.00]
/Resources 2 0 R
/Contents 4 0 R>>
endobj
4 0 obj
<</Length 37217>>
stream
8.cܬR≈`ä║dàQöWºáW╙µ
The Reports API accepts "Base64" and "Pdf".
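For example, a minimal sketch of pulling the report as a PDF instead, reusing the token and headers from the question (that the Pdf endpoint returns the raw file body is an assumption; report_id stays a placeholder):

# Hypothetical: request the report in Pdf format and save the raw bytes.
url = "https://mydomain.sugarondemand.com/rest/v10/Reports/report_id/Pdf"
r = requests.get(url, headers=headers)
with open("report.pdf", "wb") as f:
    f.write(r.content)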
Python Wrapper for SugarCRM REST API v10
https://github.com/Feverup/pysugarcrm
Quickstart
pip install pysugarcrm
from pysugarcrm import SugarCRM
api = SugarCRM('https://yourdomain.sugaropencloud.e', 'youruser', 'yourpassword')
# Return info about current user
api.me
# A more complex query requesting employees
api.get('/Employees', query_params={'max_num': 2, 'offset': 2, 'fields': 'user_name,email'})
{'next_offset': 4,
'records': [{'_acl': {'fields': {}},
'_module': 'Employees',
'date_modified': '2015-09-09T13:40:32+02:00',
'email': [{'email_address': 'John.doe@domain.com',
'invalid_email': False,
'opt_out': False,
'primary_address': True,
'reply_to_address': False}],
'id': '12364218-7d79-80e0-4f6d-35ed99a8419d',
'user_name': 'john.doe'},
{'_acl': {'fields': {}},
'_module': 'Employees',
'date_modified': '2015-09-09T13:39:54+02:00',
'email': [{'email_address': 'alice@domain.com',
'invalid_email': False,
'opt_out': False,
'primary_address': True,
'reply_to_address': False}],
'id': 'a0e117c0-9e46-aebf-f71a-55ed9a2b4731',
'user_name': 'alice'}]}
# Generate a Lead
api.post('/Leads', json={'first_name': 'John', 'last_name': 'Smith', 'business_name_c': 'Test John', 'contact_email_c': 'john@smith.com'})
from pysugarcrm import sugar_api

with sugar_api('http://testserver.com/', "admin", "12345") as api:
    data = api.get('/Employees', query_params={'max_num': 2, 'offset': 2, 'fields': 'user_name,email'})
    api.post('/Leads', json={'first_name': 'John', 'last_name': 'Smith', 'business_name_c': 'Test John', 'contact_email_c': 'john@smith.com'})

# Once we exit the context manager the sugar connection is closed and the user is logged out