I've got some msgpack-encoded data (enc_data) coming from Redis.
I've set up my Sentinel and connection as follows:
sentinel = Sentinel([('localhost', 26379)], decode_responses=False)
conn = sentinel.master_for('foo', socket_timeout=0.5, decode_responses=False)
I used decode_responses in both places since I can't determine which one actually works.
Next I read my data and decode it:
enc_data = conn.get('msgpack:data:key')
data = msgpack.loads(enc_data)
What I see:
data.keys()
########################################
dict_keys([b'key_0', b'key_1', b'key_2', b'key_3'])
However:
print(data.get('key_0'))
#######################################
None
Could you point out what I'm doing wrong decoding this data or accessing it?
So...
class Unpacker(object):
    """Streaming unpacker.

    :param bool raw:
        If true, unpack msgpack raw to Python bytes (default).
        Otherwise, unpack to Python str (or unicode on Python 2) by decoding
        with UTF-8 encoding (recommended).
        Currently, the default is true, but it will be changed to false in
        near future. So you must specify it explicitly for keeping backward
        compatibility.

        *encoding* option which is deprecated overrides this option.
With the default raw=True, the keys are unpacked as bytes, which is why data.keys() shows b'key_0' and why the str lookup data.get('key_0') returns None. Set this option to False and everything works like magic. (Keeping decode_responses=False on the Redis side is correct here: the msgpack payload has to reach msgpack.loads as bytes.)
data = msgpack.loads(enc_data, raw=False)
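If you'd rather keep the default raw=True, a bytes-key lookup works too; a quick sketch of both approaches, using the same enc_data as above:
data = msgpack.loads(enc_data)             # default raw=True: keys are bytes
print(data[b'key_0'])                      # bytes lookup succeeds
data = msgpack.loads(enc_data, raw=False)  # keys decoded to str via UTF-8
print(data['key_0'])                       # str lookup succeeds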
Related
I'm trying to consume a web service with Python Zeep that has a parameter of type xsd:base64Binary; the technical document specifies the type as Byte[].
The errors are:
urllib3.exceptions.HeaderParsingError: [StartBoundaryNotFoundDefect(), MultipartInvariantViolationDefect()], unparsed data: ''
and in the reply I get the generic error "data at the root level is invalid".
I can't find the correct way to do it.
My code is:
content = open(fileName, "r").read()
encodedContent = base64.b64encode(content.encode('ascii'))
myParameter = dict(param=dict(XMLFile=encodedContent))
client.service.SendFile(**myParameter)
This is what the built-in Base64Binary type looks like in zeep:
class Base64Binary(BuiltinType):
    accepted_types = [str]
    _default_qname = xsd_ns("base64Binary")

    @check_no_collection
    def xmlvalue(self, value):
        return base64.b64encode(value)

    def pythonvalue(self, value):
        return base64.b64decode(value)
As you can see, it does the encoding and decoding by itself. You don't need to encode the file content; send it as-is and zeep will base64-encode it before putting it on the wire.
Most likely this is what's causing the issue: when the message element is decoded, an array of bytes is expected, but another base64 string is found there instead.
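So the fix is to drop the manual base64 step and hand zeep the raw file bytes; a sketch against the same SendFile service (binary read mode, since the payload is arbitrary bytes):
# Read the raw bytes; zeep's Base64Binary.xmlvalue() base64-encodes them itself
with open(fileName, "rb") as f:
    content = f.read()
client.service.SendFile(param=dict(XMLFile=content))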
I wrote a python script to retrieve data from a website in json format using the requests library, and then I dump it into a json file. I have written a lot of code utilizing this data and have tested it in Windows only. Recently I shifted to a Linux system, and when the same python script is executed, the order of the keys in the json file is completely different.
This is the code I'm using:
API_request = requests.get('https://www.abcd.com/datarequest')
alertJson_Data = API_request.json() # To convert returned data to json
json.dump(alertJson_Data, jsonDataFile) # for adding the json data for the alert to the file
jsonDataFile.write('\n')
jsonDataFile.close()
A lot of my other scripts depend on the ordering of the keys in this JSON file, so is there any way to get the same ordering in Linux that is used in Windows?
For example, in Windows the order is "id":, "src":, "dest":, whereas in Linux it's completely different. If I go directly to the web link in my browser, it has the same ordering as the one saved in Windows. How do I retain this ordering?
Can you use collections.OrderedDict when loading json?
e.g.
from collections import OrderedDict
alertJson_Data = API_request.json(object_pairs_hook=OrderedDict)
This should work, because the json() method implemented in requests takes the same optional arguments as json.loads:
json(**kwargs)
Returns the json-encoded content of a response, if any.
Parameters: **kwargs – Optional arguments that json.loads takes.
Raises: ValueError – If the response body does not contain valid json.
And the documentation of json.loads specifies:
object_hook, if specified, will be called with the result of every
JSON object decoded and its return value will be used in place of the
given dict. This can be used to provide custom deserializations (e.g.
to support JSON-RPC class hinting).
object_pairs_hook, if specified will be called with the result of
every JSON object decoded with an ordered list of pairs. The return
value of object_pairs_hook will be used instead of the dict. This
feature can be used to implement custom decoders that rely on the
order that the key and value pairs are decoded (for example,
collections.OrderedDict() will remember the order of insertion). If
object_hook is also defined, the object_pairs_hook takes priority.
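Putting it together with the snippet from the question (a sketch; the output filename alerts.json is made up here):
from collections import OrderedDict
import json
import requests

API_request = requests.get('https://www.abcd.com/datarequest')
# object_pairs_hook keeps the keys in the order they appear in the response body
alertJson_Data = API_request.json(object_pairs_hook=OrderedDict)

with open('alerts.json', 'a') as jsonDataFile:
    # json.dump writes keys in the dict's order as long as sort_keys=True is not passed
    json.dump(alertJson_Data, jsonDataFile)
    jsonDataFile.write('\n')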
I'm wondering if anyone can tell me why the 'itsdangerous' module returns the signed text as a portion of the string. As in:
>>> import itsdangerous
>>>
>>> secret = 'my-secret-key'
>>> token = itsdangerous.TimedSerializer(secret)
>>> token.dumps('my-user-id')
'"my-user-id".Cj51kA.yuoSx6eK0LuuphWK0TlOBil2PM0'
I suppose I could just do something like this to get the hash:
token.dumps('my-user-id').split('.', 1)[1]
... but I'm surprised that I would even need to do this in the first place. The fact that the documentation doesn't explicitly mention this behavior or simply offer a method to strip out the signed text makes me nervous enough to question whether I'm doing something insecure. Thanks in advance for shedding light on the following questions:
1) Is there a good reason why the library would do this?
2) What is the safest way to ensure I don't return the encoded string in plain text along with the hash?
The purpose of itsdangerous is not to encrypt your data; it is just a simple tool to detect tampered data.
... When you get the data back you can easily ensure that nobody tampered with it.
Therefore, you should encrypt the data yourself, before or after signing it with this module.
itsdangerous signs the text, or any other data, so that it can be transmitted over unsafe channels, or stored in a database, and then checked on the other end, or upon retrieval, to verify that it wasn't changed or tampered with.
So it creates a signature, appends it to the signed data, and then checks upon retrieval that the data wasn't tampered with. The other side needs both the data and the signature.
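A minimal sketch of that round trip; loads() verifies the signature and raises itsdangerous.BadSignature if the payload was altered:
import itsdangerous

secret = 'my-secret-key'
serializer = itsdangerous.TimedSerializer(secret)

token = serializer.dumps('my-user-id')   # payload + '.' + timestamp + '.' + signature
print(serializer.loads(token))           # 'my-user-id' (signature verified)

tampered = token.replace('my-user-id', 'admin')
try:
    serializer.loads(tampered)
except itsdangerous.BadSignature:
    print('tampering detected')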
I'm using MongoDB and Redis; Redis is my cache.
I'm caching mongodb objects with redis-py:
obj in mongodb: {u'name': u'match', u'section_title': u'\u6d3b\u52a8', u'title':
u'\u6bd4\u8d5b', u'section_id': 1, u'_id': ObjectId('4fb1ed859b10ed2041000001'), u'id': 1}
the obj fetched from redis with hgetall(key) is:
{'name': 'match', 'title': '\xe6\xaf\x94\xe8\xb5\x9b', 'section_title':
'\xe6\xb4\xbb\xe5\x8a\xa8', 'section_id': '1', '_id': '4fb1ed859b10ed2041000001', 'id': '1'}
As you can see, the obj fetched from the cache holds str values instead of unicode, so in my app there are errors like: 'ascii' codec can't decode byte 0xe6 in position 12: ordinal not in range(128)
Can anyone give some suggestions? Thank you.
I think I've discovered the problem. After reading this, I had to explicitly decode from Redis, which is a pain, but works.
I stumbled across a blog post where the author's output was all unicode strings, which was obviously different from mine.
Looking into StrictRedis.__init__, there is a parameter decode_responses, which defaults to False: https://github.com/andymccurdy/redis-py/blob/273a47e299a499ed0053b8b90966dc2124504983/redis/client.py#L446
Pass in decode_responses=True on construction; for me this fixes the OP's issue.
Update: for a global setting, check jmoz's answer.
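For example, a quick sketch against a local Redis instance:
import redis

r = redis.StrictRedis(host='localhost', port=6379, decode_responses=True)
r.hset('obj', 'title', u'\u6bd4\u8d5b')
print(r.hgetall('obj'))   # keys and values come back decoded as unicode text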
If you're using a third-party lib such as django-redis, you may need to specify a customized ConnectionFactory:
class DecodeConnectionFactory(redis_cache.pool.ConnectionFactory):
    def get_connection(self, params):
        params['decode_responses'] = True
        return super(DecodeConnectionFactory, self).get_connection(params)
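Then point django-redis at it via the DJANGO_REDIS_CONNECTION_FACTORY setting; the module path below is hypothetical:
# settings.py (module path is made up for this example)
DJANGO_REDIS_CONNECTION_FACTORY = 'myapp.redis.DecodeConnectionFactory'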
Assuming you're using redis-py, you'd better pass str rather than unicode to Redis, or else Redis will encode it automatically for the *set commands, normally in UTF-8. For the *get commands, Redis has no idea of the original type of a value and just returns it as str.
Thus, as Denis said, the way you store the object in Redis is critical: you need to transform the value to str to make the Redis layer transparent to you.
Also, set the default encoding to UTF-8 instead of using ascii.
For each string you can use the decode function to transform it into unicode, e.g. for the value of the title field in your code:
In [7]: a='\xe6\xaf\x94\xe8\xb5\x9b'
In [8]: a.decode('utf8')
Out[8]: u'\u6bd4\u8d5b'
I suggest you always encode to UTF-8 before writing to MongoDB or Redis (or any external system), and that you decode('utf-8') when you fetch results, so that you always work with unicode inside Python.
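A quick sketch of that convention with redis-py (assuming a plain client r with decode_responses left at its default of False):
title = u'\u6bd4\u8d5b'
r.hset('obj', 'title', title.encode('utf-8'))      # bytes at the I/O boundary
fetched = r.hget('obj', 'title').decode('utf-8')   # back to unicode on read
assert fetched == title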
I'm trying to store image data (screenshots) in an SQLite database.
now = int(math.floor(time.time()))
ba = QByteArray()
buff = QBuffer(ba)
image.save(buff, format)
params = (str(ba.data()), "image/%s"%format, now, url)
s_conn = sqlite.connect("cache/screenshots_%s.db"%row['size'])
s_curs = s_conn.cursor()
s_curs.execute("UPDATE screenshots SET data=?, mime=?, created=? WHERE filename=?", params)This code gives me error "TypeError: not all arguments converted during string formatting"
Any manipulation of the QByteArray (including converting it to QString) gives me this error, or an ascii-to-utf-8 conversion error.
I've Googled this issue for about 2 days and all the advice was incorrect for me.
How can I work around it?
The biggest issue is that you are trying to store binary data as a string by calling str(ba.data()). If you do that, it will not be a valid string and will cause endless grief later. Behind the scenes, SQLite uses Unicode for all strings; however, it does not check that a provided string is valid Unicode (UTF-8/16). Consequently you can insert binary garbage pretending it is a string, but retrieval will fail dismally since the data won't convert to Unicode.
SQLite has a binary type (named BLOB), and that is exactly what you should be using. The way you provide a binary/blob binding depends on the SQLite wrapper you are using; it looks like you are using PySQLite or sqlite3. For Python 2 use buffer, and for Python 3 use bytes:
# Python 2
params = (buffer(ba.data()), ...)
# Python 3
params = (bytes(ba.data()), ...)
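Applied to the snippet from the question, a Python 2 sketch (image, format, url, row, and the sqlite module are assumed from the original code):
now = int(math.floor(time.time()))

ba = QByteArray()
buff = QBuffer(ba)
image.save(buff, format)

# buffer() makes the wrapper bind the image bytes as a BLOB instead of text
params = (buffer(ba.data()), "image/%s" % format, now, url)
s_conn = sqlite.connect("cache/screenshots_%s.db" % row['size'])
s_curs = s_conn.cursor()
s_curs.execute("UPDATE screenshots SET data=?, mime=?, created=? WHERE filename=?", params)
s_conn.commit()   # persist the update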