I'm using MongoDB and Redis; Redis is my cache.
I'm caching MongoDB objects with redis-py:
obj in MongoDB:
{u'name': u'match', u'section_title': u'\u6d3b\u52a8', u'title': u'\u6bd4\u8d5b', u'section_id': 1, u'_id': ObjectId('4fb1ed859b10ed2041000001'), u'id': 1}
the obj fetched from Redis with hgetall(key) is:
{'name': 'match', 'title': '\xe6\xaf\x94\xe8\xb5\x9b', 'section_title': '\xe6\xb4\xbb\xe5\x8a\xa8', 'section_id': '1', '_id': '4fb1ed859b10ed2041000001', 'id': '1'}
As you can see, the obj fetched from the cache contains str values instead of unicode, so in my app I get errors like: 'ascii' codec can't decode byte 0xe6 in position 12: ordinal not in range(128)
Can anyone give some suggestions? Thank you.
I think I've discovered the problem. After reading this, I had to explicitly decode everything coming from Redis, which is a pain, but it works.
I stumbled across a blog post where the author's output was all unicode strings, which was obviously different from mine.
Looking into StrictRedis.__init__, there is a parameter decode_responses, which defaults to False. https://github.com/andymccurdy/redis-py/blob/273a47e299a499ed0053b8b90966dc2124504983/redis/client.py#L446
Pass decode_responses=True to the constructor; for me this fixes the OP's issue.
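For reference, a minimal sketch of the fix (the host, port, and key name are illustrative, not from the question):

import redis

# decode_responses=True tells redis-py to decode replies to unicode
# (UTF-8 by default) instead of returning raw byte strings.
r = redis.StrictRedis(host='localhost', port=6379, decode_responses=True)
print(r.hgetall('some:key'))  # values now come back as unicode strings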
Update: for a global setting, check jmoz's answer.
If you're using a third-party lib such as django-redis, you may need to specify a customized ConnectionFactory:
class DecodeConnectionFactory(redis_cache.pool.ConnectionFactory):
    def get_connection(self, params):
        params['decode_responses'] = True
        return super(DecodeConnectionFactory, self).get_connection(params)
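If I recall correctly, django-redis also lets you activate a custom factory globally through a settings entry; something like the following should work, though the dotted path here is a placeholder of mine, so check the django-redis docs for your version:

# settings.py
DJANGO_REDIS_CONNECTION_FACTORY = 'myapp.factories.DecodeConnectionFactory'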
Assuming you're using redis-py, you'd better pass str instead of unicode to Redis, or else Redis will encode it automatically for the *SET commands, normally in UTF-8. For the *GET commands, Redis has no idea of the original type of a value and has to return the value as str directly.
Thus, as Denis said, the way you store the object in Redis is critical: you need to transform the value to str to make the Redis layer transparent to you.
Also, set the default encoding to UTF-8 instead of using ASCII.
For each string you can use the decode function to decode it from UTF-8 into unicode, e.g. for the value of the title field in your code:
In [7]: a='\xe6\xaf\x94\xe8\xb5\x9b'
In [8]: a.decode('utf8')
Out[8]: u'\u6bd4\u8d5b'
I suggest you always encode to UTF-8 before writing to MongoDB or Redis (or any external system), and decode('utf-8') when you fetch results, so that you always work with unicode inside Python.
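As an illustration only (the helper names are mine, not from this thread), the encode-on-write / decode-on-read convention looks something like this with redis-py on Python 2:

def cache_set(r, key, mapping):
    # Encode every unicode value to UTF-8 bytes before it crosses the boundary.
    encoded = dict((k, v.encode('utf-8') if isinstance(v, unicode) else v)
                   for k, v in mapping.items())
    r.hmset(key, encoded)

def cache_get(r, key):
    # Decode every value back to unicode as soon as it comes out of Redis.
    return dict((k, v.decode('utf-8')) for k, v in r.hgetall(key).items())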
I've got some enc_data packed with msgpack coming from Redis.
I've set up my Sentinel and connection as follows:
sentinel = Sentinel([('localhost', 26379)], decode_responses=False)
conn = sentinel.master_for('foo', socket_timeout=0.5, decode_responses=False)
I used decode_responses in both places since I can't determine which one actually takes effect.
Next I'm reading my data and decoding it:
enc_data = conn.get('msgpack:data:key')
data = msgpack.loads(enc_data)
What I see
data.keys()
########################################
dict_keys([b'key_0', b'key_1', b'key_2', b'key_3'])
However
print(data.get('key_0'))
#######################################
None
Could you point out what I'm doing wrong decoding this data or accessing it?
So...
class Unpacker(object):
"""Streaming unpacker.
:param bool raw:
If true, unpack msgpack raw to Python bytes (default).
Otherwise, unpack to Python str (or unicode on Python 2) by decoding
with UTF-8 encoding (recommended).
Currently, the default is true, but it will be changed to false in
near future. So you must specify it explicitly for keeping backward
compatibility.
*encoding* option which is deprecated overrides this option.
The raw option needs to be set to False, and then everything works like magic:
data = msgpack.loads(enc_data, raw=False)
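To see the difference the flag makes, here is a standalone illustration (assuming a msgpack-python version that supports the raw flag):

import msgpack

packed = msgpack.packb({'key_0': 1})
print(msgpack.unpackb(packed, raw=True).keys())   # dict_keys([b'key_0']) -- bytes keys
print(msgpack.unpackb(packed, raw=False).keys())  # dict_keys(['key_0'])  -- str keys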
I am writing a program (Python 3.5.2) that uses an HTTPSConnection to get a JSON object as a response. I have it working using some example code, but am not sure where a method comes from.
My question is this: in the code below, the decode('utf-8') method doesn't appear in the documentation at https://docs.python.org/3.4/library/http.client.html#http.client.HTTPResponse under "21.12.2. HTTPResponse Objects". How would I know that the return value of response.read() has the decode('utf-8') method available?
Do Python objects inherit from a base class like C# objects do or am I missing something?
http = HTTPSConnection(get_hostname(token))
http.request('GET', uri_path, headers=get_authorization_header(token))
response = http.getresponse()
print(response.status, response.reason)
feed = json.loads(response.read().decode('utf-8'))
Thank you for your help.
The read method of the response object always returns a byte string (in Python 3, which I presume you are using since you use the print function). The byte string does indeed have a decode method, so there should be no problem with this code. Of course, it assumes that the response is encoded in UTF-8, which may or may not be correct.
[Technical note: email is a very difficult medium to handle: messages can be made up of different parts, each of which is encoded differently. At least with web traffic you stand a chance of reading the Content-Type header's charset attribute to find the correct encoding.]
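A sketch of that idea (the host and path are placeholders, not from the question):

import json
from http.client import HTTPSConnection

conn = HTTPSConnection('api.example.com')
conn.request('GET', '/feed')
response = conn.getresponse()

# Fall back to UTF-8 unless the Content-Type header names a charset,
# e.g. "application/json; charset=iso-8859-1".
charset = 'utf-8'
for part in response.getheader('Content-Type', '').split(';')[1:]:
    key, _, value = part.strip().partition('=')
    if key.lower() == 'charset' and value:
        charset = value
feed = json.loads(response.read().decode(charset))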
UPDATE: I opened an issue on GitHub based on Ivan Mainetti's suggestion. If you want to weigh in there, it is: https://github.com/orientechnologies/orientdb/issues/6757
I am working on a database built on OrientDB and using a Python interface for it. I've had pretty good luck with it, but I've run into a problem that seems to be the driver's (pyorient) wonkiness when dealing with certain unicode characters.
The data structure I'm uploading to the database looks like this:
new_Node = {'#Nodes':
    {
        'Abs_Address': Ono.absolute_address,
        'Content': Ono.content,
        'Heading': Ono.heading,
        'Type': Ono.type,
        'Value': Ono.value
    }
}
I have created literally hundreds of records flawlessly on OrientDB / pyorient. I don't think the problem is necessarily pyorient-specific, however; I think the reason it is failing on this particular record is that the Ono.absolute_address element contains a unicode character that pyorient is somehow choking on.
The record I want to create has an Abs_Address of /u/c/2/a1–2, but the node I get when I pass the value to my data structure above is this:
{'#Nodes': {'Content': '', 'Abs_Address': u'/u/c/2/a1\u20132', 'Type': 'section', 'Heading': ' Transferred', 'Value': u'1\u20132'}}
I think that somehow my problem is that Python is mixing unicode and ascii strings/chars? I'm a bit new to Python and not declaring types, so I'm hoping this isn't an issue with pyorient per se, given that the new_Node object doesn't output the properly formatted string...? Or is this an instance of pyorient not liking unicode? I'm tearing my hair out on this one. Any help is appreciated.
In case the error is coming from pyorient and not some kind of text-encoding issue, here's the pyorient-related info. I am creating the record using this code:
rec_position = self.pyo_client.record_create(14, new_Node)
And this is the error I'm getting:
com.orientechnologies.orient.core.storage.ORecordDuplicatedException - Cannot index record Nodes{Content:,Abs_Address:null,Type:section,Heading: Transferred,Value:null}: found duplicated key 'null' in index 'Nodes.Abs_Address' previously assigned to the record #14:558
The error is odd, as it suggests that the backend database is getting a null object for the address. Apparently it did create an entry for this "address," but it's not what I want it to do. I don't know why address strings with unicode are coming up null in the database... I can create the record through OrientDB Studio using the exact string I fed into the new_Node data structure, but I can't use Python to do the same thing.
Can someone help?
EDIT:
Thanks to Laurent, I've narrowed the problem down to something to do with unicode objects and pyorient. Whenever the variable I am passing is of type unicode, the pyorient adapter sends a null value to the OrientDB database. I determined that the value causing the problem is an ndash symbol, and Laurent helped me replace it with a minus sign using this code:
.replace(u"\u2013",u"-")
When I do that, however, pyorient still receives unicode objects, which it then passes as null values... This is not good. I can fix this short-term by recasting the string using str(...), and this appears to solve my immediate problem:
str(Ono.absolute_address.replace(u"\u2013",u"-"))
The problem is, I know I will have symbols and other unusual characters in my DB data. I know the database supports unicode strings, because I can add them manually or use SQL syntax to do what I cannot do via pyorient and Python... I am assuming this is a dicey casting issue somewhere, but I'm not really sure where. This seems very similar to this problem: http://stackoverflow.duapp.com/questions/34757352/how-do-i-create-a-linked-record-in-orientdb-using-pyorient-library
Any pyorient people out there? Python gods? Lucky s0bs? =)
I have tried your example on Python 3 with the development branch of pyorient and the latest version of OrientDB 2.2.11. If I pass the values without escaping them, your example seems to work for me and I get the right values back.
So this test works:
def test_test1(self):
    new_Node = {'#Nodes': {'Content': '',
                           'Abs_Address': '/u/c/2/a1–2',
                           'Type': 'section',
                           'Heading': ' Transferred',
                           'Value': u'1\u20132'}
                }
    self.client.record_create(14, new_Node)
    result = self.client.query('SELECT * FROM V where Abs_Address="/u/c/2/a1–2"')
    assert result[0].Abs_Address == '/u/c/2/a1–2'
I think you may be saving the unicode value as an escaped value and that's where things get tricky.
I don't trust replacing values myself, so I usually escape the unicode values I send to OrientDB with the following code:
import json

def _escape(string):
    return json.dumps(string)[1:-1]
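For illustration (this REPL output is mine, not from the original answer), the en dash in the problem address becomes a \u escape:

>>> _escape(u'/u/c/2/a1\u20132')
'/u/c/2/a1\\u20132'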
The following test would fail because the escaped value won't match the escaped value in the DB, so no record will be returned:
def test_test2(self):
    new_Node = {'#Nodes': {'Content': '',
                           'Abs_Address': _escape('/u/c/2/a1–2'),
                           'Type': 'section',
                           'Heading': ' Transferred',
                           'Value': u'1\u20132'}
                }
    self.client.record_create(14, new_Node)
    result = self.client.query('SELECT * FROM V where Abs_Address="%s"' % _escape('/u/c/2/a1–2'))
    assert result[0].Abs_Address.encode('UTF-8').decode('unicode_escape') == '/u/c/2/a1–2'
In order to fix this, you have to escape the value twice:
def test_test3(self):
    new_Node = {'#Nodes': {'Content': '',
                           'Abs_Address': _escape('/u/c/2/a1–2'),
                           'Type': 'section',
                           'Heading': ' Transferred',
                           'Value': u'1\u20132'}
                }
    self.client.record_create(14, new_Node)
    result = self.client.query('SELECT * FROM V where Abs_Address="%s"' % _escape(_escape('/u/c/2/a1–2')))
    assert result[0].Abs_Address.encode('UTF-8').decode('unicode_escape') == '/u/c/2/a1–2'
This test will succeed because you will now be asking for the escaped value in the DB.
I have a problem that I would like to know how to efficiently tackle.
I have data that is JSON-formatted (used with dumps / loads) and contains unicode.
This is part of a protocol implemented with JSON for sending messages, so messages are sent as strings and then loaded into Python dictionaries. This means that the representation as a Python dictionary afterwards will look something like:
{u"mykey": u"myVal"}
Handling such structures is no problem in itself for the system, but the trouble starts when I make a database query to store one of them.
I'm using pyorient against OrientDB. The command ends up something like:
"CREATE VERTEX TestVertex SET data = {u'mykey': u'myVal'}"
This ends up with the data field getting the following value in OrientDB:
{'_NOT_PARSED_': '_NOT_PARSED_'}
I'm assuming this problem relates to other cases as well when you wish to make a query or somehow represent a data object containing unicode.
How could I efficiently get a representation of this data, of arbitrary depth, to be able to use it in a query?
To clarify even more, this is the string the db expects:
"CREATE VERTEX TestVertex SET data = {'mykey': 'myVal'}"
If I'm simply stating the wrong problem/question and should handle it some other way, I'm very much open to suggestions. But what I want to achieve is an efficient way to use Python 2.7 to build a DB query against OrientDB (using pyorient) that specifies an arbitrary data structure. The data property being set is of the OrientDB type EMBEDDEDMAP.
Any help greatly appreciated.
EDIT1:
To avoid confusion: the first code block shows the object as a dict AFTER being dumped/loaded with json.
Dargolith:
OK, based on your last response it seems you are simply looking for code that will dump a Python expression in a way that lets you control how unicode and other data types print. Here is a very simple function that provides this control. There are ways to make this function more efficient (for example, by using a string buffer rather than doing all of the recursive string concatenation happening here). Still, this is a very simple function, and as it stands its execution is probably still dominated by your DB lookup.
As you can see in each of the 'if' statements, you have full control of how each data type prints.
def expr_to_str(thing):
    # Dict-like objects: recurse over the key/value pairs.
    if hasattr(thing, 'keys'):
        pairs = ['%s:%s' % (expr_to_str(k), expr_to_str(v)) for k, v in thing.iteritems()]
        return '{%s}' % ', '.join(pairs)
    # List-like objects: recurse over the elements.
    if hasattr(thing, '__setslice__'):
        parts = [expr_to_str(ele) for ele in thing]
        return '[%s]' % (', '.join(parts),)
    # Strings (str or unicode): wrap in single quotes.
    if isinstance(thing, basestring):
        return "'%s'" % (str(thing),)
    return str(thing)

print "dumped: %s" % expr_to_str({'one': 33, 'two': [u'unicode', 'just a str', 44.44, {'hash': 'here'}]})
outputs:
dumped: {'two':['unicode', 'just a str', 44.44, {'hash':'here'}], 'one':33}
I went on to use json.dumps() as sobolevn suggested in the comments. I didn't think of it at first since I wasn't really using JSON in the driver. It turned out, however, that json.dumps() provided exactly the formats I needed for all the data types I use. Some examples:
>>> json.dumps('test')
'"test"'
>>> json.dumps(['test1', 'test2'])
'["test1", "test2"]'
>>> json.dumps([u'test1', u'test2'])
'["test1", "test2"]'
>>> json.dumps({u'key1': u'val1', u'key2': [u'val21', 'val22', 1]})
'{"key2": ["val21", "val22", 1], "key1": "val1"}'
If you need more control over the format, quoting, or other aspects of this conversion, see the reply by Dan Oblinger.
Background
I'm in a real mess with unicode and Python. It seems to be a common source of angst; I've tried other solutions out there, but I just can't get my head around it.
Setup
MySQL Database Setup
collation_database: utf8_general_ci
character_set_database: utf8
SQLAlchemy Model
class Product(Base):
    id = Column('product_id', Integer, primary_key=True)
    name = Column('product_name', String(64))  # Tried using Unicode() but it didn't help
Pyramid View
@view_config(renderer='json', route_name='products_search')
def products_search(request):
    json_products = []
    term = "%%%s%%" % request.params['term']
    products = dbsession.query(Product).filter(Product.name.like(term)).all()
    for prod in products:
        json_prod = {'id': prod.id, 'label': prod.name, 'value': prod.name,
                     'sku': prod.sku, 'price': str(prod.price[0].price)}
        json_products.append(json_prod)
    return json_products
Problem
I get encoding errors reported from the json module (which is called as it's the renderer for this route) like so:
UnicodeDecodeError: 'utf8' codec can't decode byte 0x96 in position 37: invalid start byte
The culprit is a "-" (dash symbol) in the prod.name value. Full stack trace here. If the returned products don't have a "-" in them, it all works fine!
Tried
I've tried encoding and decoding with various codecs before returning the json_products variable.
The above comment is right, but more specifically, you can replace 'label': prod.name with 'label': prod.name.decode("cp1252"). You should probably also do this for all of the strings in your json_prod dictionary, since it's likely you'll see cp1252-encoded characters elsewhere in the real-world use of your application.
On that note, depending on the source of these strings and how widely that source is used in your app, you may run into such problems elsewhere, and generally when you least expect it. To investigate further, you may want to figure out what the source of these strings is and whether you can do the decoding/re-encoding at a lower level to correct most future problems with this.
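A sketch of that fix inside the view (my illustration of the suggestion, not code from the answer):

json_prod = {'id': prod.id,
             'label': prod.name.decode('cp1252'),
             'value': prod.name.decode('cp1252'),
             'sku': prod.sku,
             'price': str(prod.price[0].price)}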