Python PUT requests, send int instead of string

I need to send a PUT request using Python's requests library. The XML has to be a string, but I need to send the value as an int.
import requests
bodyXML = """<?xml version='1.0' encoding='utf-8'?><parameters><value>6</value></parameters>"""
bodyHeader = {'Content-Type': 'application/xml'}
p = requests.put('http://test.pt:1891/cash', data=bodyXML, headers=bodyHeader)
The response is:
<?xml version = "1.0" encoding = "UTF-8"?><reply-text>Value must be numeric.</reply-text>
That's a reply from the server, so I can reach the target and get a response.
If I change 6 to """ + 6 + """, I receive an error in Python
TypeError: must be str, not int
How can I camouflage the integer as a string?

When you send data as part of the HTTP request body, that data will always be sent as a string (technically, it's just bytes that happen to encode a string). Furthermore, XML is a text-based format, so it also has to be a string.
If the server does not accept the data you send, you should talk to the service provider and ask them how the data needs to be formatted for the server to accept it.
It is likely that the XML structure requires a different tag for numeric values, or even a special attribute to specify the type.
Since that is specific to your service, we can't answer that part for you.

As stubborn as a programmer can be, I found a way to send the values as they are: build the XML with lxml and prepend the string "post_xml=" to the XML before sending. It works just fine:
import requests
from lxml import etree
root = etree.Element("parameters")
child = etree.SubElement(root, "value")
child.text = str(6)
xml = 'post_xml=' + etree.tostring(root, encoding='unicode', method='xml')
p = requests.request("PUT", 'http://test.pt:1891/cash', data=xml)
print(p.text)
The value 6 is accepted as an integer by the server.
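Incidentally, the "post_xml=" prefix looks like a form field name, which suggests the endpoint may actually be reading the body as form data. If that guess is right (only the service documentation can confirm it), a sketch like the following lets requests build the form encoding itself; note that requests will percent-encode the XML value, which the server may or may not expect:
import requests
from lxml import etree

root = etree.Element("parameters")
child = etree.SubElement(root, "value")
child.text = str(6)  # the number still travels as text inside the XML

# passing a dict makes requests send application/x-www-form-urlencoded data,
# i.e. post_xml=<percent-encoded XML>
payload = {'post_xml': etree.tostring(root, encoding='unicode', method='xml')}
p = requests.put('http://test.pt:1891/cash', data=payload)
print(p.text)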

Easy, assuming you are putting the + 6 + in place of the 6 in <value> in
bodyXML = """<?xml version='1.0' encoding='utf-8'?><parameters><value>6</value></parameters>"""
The issue is that the triple-quoted string notation """ terminates before the 6, so you end up concatenating an int onto a str. To fix this, convert the int to a string, either by putting the 6 inside quotation marks or by wrapping it with str(), as in str(6):
import requests
bodyXML = """<?xml version='1.0' encoding='utf-8'?><parameters><value>""" + str(6) + """</value></parameters>"""

Related

Consume web service having Byte64 array as parameter with python Zeep

I'm trying to consume a web service with Python Zeep that has a parameter of type xsd:base64Binary; the technical document specifies the type as Byte[].
Errors are:
urllib3.exceptions.HeaderParsingError: [StartBoundaryNotFoundDefect(), MultipartInvariantViolationDefect()], unparsed data: ''
and in the reply I get a generic error: "Data at the root level is invalid."
I can't find the correct way to do it.
My code is:
content=open(fileName,"r").read()
encodedContent = base64.b64encode(content.encode('ascii'))
myParameter=dict(param=dict(XMLFile=encodedContent))
client.service.SendFile(**myParameter)
thanks everyone for the comments.
Mike
This is how the built-in Base64Binary type looks in zeep:
class Base64Binary(BuiltinType):
    accepted_types = [str]
    _default_qname = xsd_ns("base64Binary")

    @check_no_collection
    def xmlvalue(self, value):
        return base64.b64encode(value)

    def pythonvalue(self, value):
        return base64.b64decode(value)
As you can see, it is doing the encoding and decoding by itself. You don't need to encode the file content; send it as it is, and zeep will encode it before putting it on the wire.
Most likely this is what's causing the issue: when the message element is decoded on the server side, an array of bytes is expected but another base64 string is found there.
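A minimal sketch of what that means for the code in the question, reusing the SendFile/XMLFile names from there and a placeholder WSDL URL; the key point is that the file is read as raw bytes and passed through untouched:
import zeep

client = zeep.Client('http://example.com/service?wsdl')  # placeholder WSDL URL

# read the file as raw bytes; zeep's Base64Binary.xmlvalue() base64-encodes
# the value when it serializes the message, so no manual b64encode is needed
with open(fileName, "rb") as f:  # fileName as in the question above
    content = f.read()

client.service.SendFile(param=dict(XMLFile=content))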

Slice from the data (bytes) in between two strings in python

I have data of type bytes from the request body, like the following:
b'0\x80\x06\t*\x86H\x86\xf7\r\x01\x07\x02\xa0\x800\x80\x02\x01\x011\x0b0\t\x06\x05+\x0e\x03\x02\x1a\x05\x000\x80\x06\t*\x86H\x86\xf7\r\x01\x07\x01\xa0\x80$\x80\x04\x82\x04H<?xml version="1.0" encoding="UTF-8"?>\n<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">\n<plist version="1.0">\n<dict>\n<key>PayloadContent</key>\n<dict>\n <key>URL</key>\n<string>***</string>\n <key>DeviceAttributes</key>\n<array>\n<string>UDID</string>\n<string>DEVICE_NAME</string>\n <string>VERSION</string>\n<string>PRODUCT</string>\n<string>MAC_ADDRESS_EN0</string>\n <string>IMEI</string>\n <string>ICCID</string>\n </array>\n</dict>\n<key>PayloadOrganization</key>\n<string>Flybuilds</string>\n<key>PayloadDisplayName</key>\n<string>Device Information (UDID)</string>\n<key>PayloadVersion</key>\n<integer>1</integer>\n<key>PayloadUUID</key>\n<string>*****</string>\n<key>PayloadIdentifier</key>\n<string>******</string>\n<key>PayloadDescription</key>\n<string>Knowing the UDID of my iOS device</string>\n<key>PayloadType</key>\n<string>Profile Service</string>\n</dict>\n</plist>\n\x00\x00\x00\x00\x00\x00\xa0\x82\n#0
Is it possible to extract the data between '<?xml version' and '</plist>' and write it to a file in Python?
(We need to extract the xml part from the bytes data)
Of course it's possible to extract it knowing the start and end signature of the content you want.
#stream is the variable holding the raw data stream (bytes)
#not repeated here for brevity
start_signature = b'<?xml'
stop_signature = b'</plist>'
xml_start = stream.find(start_signature)
xml_stop = stream.find(stop_signature) + len(stop_signature)
xml_data = stream[xml_start:xml_stop]
While I think this answers the implied question of 'how' to find the data given the start and end, the downside of this solution is that if the xml changes, the script may break. This concern may not be an issue if you know the data will be consistent each time.
If you can learn the meaning of the other bytes in the data you would likely be able to determine the start position and length of the xml without having to know the precise xml contents.
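To cover the second part of the question (writing the slice to a file), something like this should do; the output file name is just an example:
# write the extracted XML/plist slice out as raw bytes
with open('payload.plist', 'wb') as f:
    f.write(xml_data)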

How to separate data in a Restful API?

I am working on a program that reads the content of a Restful API from ImportIO. The connection works, and data is returned, but it's a jumbled mess. I'm trying to clean it to only return Asins.
I have tried using split with a delimiter, with no success.
stuff = requests.get('https://data.import.io/extractor***')
stuff.content
I get the content, but I want to extract only Asins.
While .content gives you access to the raw bytes of the response payload, you will often want to convert them into a string using a character encoding such as UTF-8. The response will do that for you when you access .text:
response.text
Because the decoding of bytes to str requires an encoding scheme, requests will try to guess the encoding based on the response's headers if you do not specify one. You can provide an explicit encoding by setting .encoding before accessing .text.
If you take a look at the response, you’ll see that it is actually serialized JSON content. To get a dictionary, you could take the str you retrieved from .text and deserialize it using json.loads(). However, a simpler way to accomplish this task is to use .json():
response.json()
The type of the return value of .json() is a dictionary, so you can access values in the object by key.
You can do a lot with status codes and message bodies. But, if you need more information, like metadata about the response itself, you’ll need to look at the response’s headers.
For More Info: https://realpython.com/python-requests/
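For illustration only, if the payload were JSON (the edit further down shows this particular feed is actually CSV), key access would look like this; the 'results' and 'asin' keys are hypothetical:
data = stuff.json()  # parse the JSON body into Python objects
# hypothetical structure: a top-level 'results' list of records with an 'asin' field
asins = [item['asin'] for item in data['results']]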
What format is the returned information in? Typically RESTful APIs will return the data as JSON, so you will likely have luck parsing it as a JSON object.
https://realpython.com/python-requests/#content
stuff_dictionary = stuff.json()
With that, the content is loaded as a dictionary and you will have a much easier time.
EDIT:
Since I don't have the full URL to test, I can't give an exact answer. Given the content type is CSV, using a pandas DataFrame is pretty easy. With a quick StackOverflow search, I found the following answer: https://stackoverflow.com/a/43312861/11530367
So I tried the following in the terminal and got a dataframe from it
from io import StringIO
import pandas as pd
pd.read_csv(StringIO("HI\r\ntest\r\n"))
So you should be able to perform the following
from io import StringIO
import pandas as pd
df = pd.read_csv(StringIO(stuff.text))
If that doesn't work, consider dropping the first three bytes you have in your response: b'\xef\xbb\xbf'. Check the answer from Mark Tolonen for how to parse this.
After that, selecting the ASIN (your second column) from your dataframe should be easy.
asins = df.loc[:, 'ASIN']
asins_arr = asins.array
The response is the byte string of CSV content encoded in UTF-8. The first three escaped byte codes are a UTF-8-encoded BOM signature. So stuff.content.decode('utf-8-sig') should decode it. stuff.text may also work if the encoding was returned correctly in the response headers.
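Putting that together with the pandas suggestion above, a sketch that handles the BOM explicitly could look like this:
from io import StringIO
import pandas as pd

text = stuff.content.decode('utf-8-sig')  # utf-8-sig strips the BOM if present
df = pd.read_csv(StringIO(text))
asins = df['ASIN']  # column name taken from the earlier answer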

trouble scraping from JSONP feed

I asked a similar question earlier
python JSON feed returns string not object
but I am having a little more trouble and don't understand it.
For about half of the dates this works and returns a JSON object
for example November 9 2013 works
url = 'http://data.ncaa.com/jsonp/scoreboard/basketball-men/d1/2013/11/09/scoreboard.html?callback=c'
r = requests.get(url)
jsonObj = json.loads(r.content[2:-2])
but if I try November 11 2013:
url = 'http://data.ncaa.com/jsonp/scoreboard/basketball-men/d1/2013/11/11/scoreboard.html?callback=c'
r = requests.get(url)
jsonObj = json.loads(r.content[2:-2])
I get this error
ValueError: No JSON object could be decoded
I don't understand why. When I put both URLs into a browser, they look exactly the same.
The JSON in the second feed is, in fact, invalid JSON. I found this by removing the callback wrapper and running the result through http://jsonlint.com/
To see for yourself, search for the following ID: 336252
The lines just above that ID contain two commas in a row, which is disallowed by the JSON spec.
My guess is that the server at data.ncaa.com is trying to generate JSON itself rather than using a JSON library. You should contact the site administrator and make them aware of this error.
Using demjson
demjson.decode(r.content[2:-2])
seems to work
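If you want to strip the JSONP wrapper without relying on it being exactly two characters on each side, cutting at the first '(' and the last ')' is a slightly more defensive sketch (it will still fail on the invalid JSON in the second feed, of course):
import json
import requests

r = requests.get(url)  # url as in the question above
text = r.text
payload = text[text.index('(') + 1 : text.rindex(')')]  # drop the c(...) wrapper
jsonObj = json.loads(payload)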

How to convert XML to JSON in Python [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Converting XML to JSON using Python?
I am importing an XML feed and trying to convert it to JSON for output. I'm getting this error:
TypeError: <xml.dom.minidom.Document instance at 0x72787d8> is not JSON serializable
Unfortunately I know next to nothing about Python. I'm developing this on the Google App Engine. I could use some help, because my little 2 hour hack that was going so well is now on its 3rd day.
XML data:
<?xml version="1.0" ?><eveapi version="2">
<currentTime>2009-01-25 15:03:27</currentTime>
<result>
<rowset columns="name,characterID,corporationName,corporationID" key="characterID" name="characters">
<row characterID="999999" corporationID="999999" corporationName="filler data" name="someName"/>
</rowset>
</result>
<cachedUntil>2009-01-25 15:04:55</cachedUntil>
</eveapi>
My code:
class doproxy(webapp.RequestHandler):
    def get(self):
        apiurl = 'http://api.eve-online.com'
        path = self.request.get('path')
        type = self.request.get('type')
        args = '&' + self.request.get('args')

        # assemble api url
        url = apiurl + path

        # do GET request
        if type == 'get':
            result = urlfetch.fetch(url, '', 'get')

        # do POST request
        if type == 'post':
            result = urlfetch.fetch(url, args, 'post')

        if result.status_code == 200:
            dom = minidom.parseString(result.content)  # .encode( "utf-8" )
            dom2json = simplejson.dump(dom, "utf-8")
I'm quickly coming to the opinion that Python is potentially a great language, but that none of its users know how to actually document anything in a clear and concise way.
The attitude of the question isn't going to help with getting answers from these same Python users.
As is mentioned in the answers to this related question, there is no 1-to-1 correspondence between XML and JSON so the conversion can't be done automatically.
In the documentation for simplejson you can find the list of types that it's able to serialize, which are basically the native Python types (dict, list, unicode, int, float, True/False, None).
So, you have to create a Python data structure containing only these types, which you will then give to simplejson.dump().
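A minimal sketch of that approach for the eveapi document above, using xml.etree.ElementTree and plain dicts and lists; the shape of the output is a design choice, not something dictated by the XML (simplejson.dumps() behaves the same as json.dumps() here):
import json
import xml.etree.ElementTree as ET

dom = ET.fromstring(result.content)  # result.content is the XML fetched above
data = {
    'currentTime': dom.findtext('currentTime'),
    'cachedUntil': dom.findtext('cachedUntil'),
    # each <row> becomes a plain dict of its attributes
    'characters': [dict(row.attrib) for row in dom.iter('row')],
}
output = json.dumps(data)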
