Django error: conversion from bytes to Decimal is not supported - python

So I have the following model
class Stock(models.Model):
    name = models.CharField(max_length=50)
    unit_measure = models.CharField(max_length=10)
    unit_price = models.DecimalField(max_digits=10, decimal_places=2)
When I try to add an instance of that model in Django's admin site, it gives me the following error
(<class 'TypeError'>, TypeError('conversion from bytes to Decimal is not supported',))
Exception Location: /Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/PyMySQL-0.5-py3.3.egg/pymysql/connections.py in defaulterrorhandler, line 209
But the data was inserted into the table successfully as I look up my database using phpmyadmin.
I am using Django 1.5 + Python 3.3 + MySQL 5.5 + PyMySQL.
Anybody have ideas about what has gone wrong here?

After 11 months I hope the original poster found a workaround (better than switching to python 2).
The OP did not list his db connection string, but maybe he was using the "use_unicode=0" setting for his connection?
I was, and I hit the same type conversion error recently. It seems like it should be possible to convert from a byte string to a Decimal, and maybe that is on someone's todo list :), but until then I can share what worked around the problem for me:
When connecting to MySQL (through PyMySQL and Python 3.4.1), set charset=utf8 (assuming you want that property, which you probably should) but do NOT set use_unicode=0. I set that property on the advice of the current (0.9) SQLAlchemy docs, which said it would be "much faster." Faster but broken isn't an improvement :(. Maybe that advice was intended only for Python 2.x users? It's a bit confusing given how PyMySQL tries to be a drop-in replacement for MySQLdb on Python 3.x, but Python's unicode and string handling changed between 2.x and 3.x.
Without diving deep into PyMySQL, I assume that with Python 3, use_unicode means that char fields are returned as native (unicode) strings rather than byte strings with contents encoded as UTF-8. Set use_unicode=0 and you get byte strings, and thus the TypeError.
Anyway, this works for me; I hope this helps someone else who sees this error.

I had the same problem in SQLite 3; the solution I found was mentioned in a book (https://books.google.de/books?id=eLKdDwAAQBAJ&lpg=PA394&ots=xBGSOLY4Ue&dq=python%20sqlite%20byte%20to%20decimal&hl=de&pg=PA394#v=onepage&q&f=false):
import decimal

def convert_decimal(value):
    return decimal.Decimal(value.decode())
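For SQLite to actually call a converter like this, it has to be registered for a declared column type and the connection opened with detect_types. A self-contained sketch, where the DECTEXT type name is just an illustrative choice, not a standard SQLite type:

```python
import decimal
import sqlite3

def convert_decimal(value):
    # SQLite hands converters the raw bytes of the stored value
    return decimal.Decimal(value.decode())

# Register the converter for columns declared with the (made-up) type DECTEXT
sqlite3.register_converter("DECTEXT", convert_decimal)

conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
conn.execute("CREATE TABLE prices (amount DECTEXT)")
conn.execute("INSERT INTO prices VALUES (?)", ("19.99",))

amount = conn.execute("SELECT amount FROM prices").fetchone()[0]
print(type(amount), amount)  # <class 'decimal.Decimal'> 19.99
```

The declared column type is what ties the stored text back to the converter, so SELECTs return Decimal objects instead of bytes.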

Related

Django with Oracle DB - ORA-19011: Character string buffer too small

I have the following model for an Oracle database, which is not a part of my Django project:
class ResultsData(models.Model):
    RESULT_DATA_ID = models.IntegerField(primary_key=True, db_column="RESULT_DATA_ID")
    RESULT_XML = models.TextField(blank=True, null=True, db_column="RESULT_XML")

    class Meta:
        managed = False
        db_table = '"schema_name"."results_data"'
The RESULT_XML field in the database itself is declared as XMLType. I chose to represent it as a TextField in the Django model, since it has no character limit.
When I do try to download some data with that model, I get the following error:
DatabaseError: ORA-19011: Character string buffer too small
I figure it is because of the volume of data stored in the RESULT_XML field, since when I pull just .values("RESULT_DATA_ID"), it works fine.
Any ideas on how I can work around this problem? Googling for answers did not yield anything so far.
UPDATED ANSWER
I have found a much better way of dealing with this issue - I wrote a custom field Transform, which generates the Oracle SQL query I was after:
OracleTransforms.py
from django.db.models import TextField
from django.db.models.lookups import Transform


class CLOBVAL(Transform):
    '''
    Oracle-specific transform for XMLType field, which returns string data exceeding
    buffer size (ORA-19011: Character string buffer too small) as a character LOB type.
    '''
    function = None
    lookup_name = 'clobval'

    def as_oracle(self, compiler, connection, **extra_context):
        return super().as_sql(
            compiler, connection,
            template='(%(expressions)s).GETCLOBVAL()',
            **extra_context
        )


# Needed for CLOBVAL to work as a .values('field_name__clobval') lookup in Django ORM queries
TextField.register_lookup(CLOBVAL)
With the above, I can now just write a query as follows:
from .OracleTransforms import CLOBVAL
ResultsData.objects.filter(RESULT_DATA_ID=some_id).values('RESULT_DATA_ID', 'RESULT_XML__clobval')
or
ResultsData.objects.filter(RESULT_DATA_ID=some_id).values('RESULT_DATA_ID', XML=CLOBVAL('RESULT_XML'))
This is the best solution for me, as I do get to keep using QuerySet, instead of RawQuerySet.
The only limitation I see with this solution for now is that I always need to request the field through the transform, e.g. .values('RESULT_XML__clobval'), in my ORM queries, or Oracle will report ORA-19011 again, but I guess this is still a good outcome.
OLD ANSWER
So, I have found a way around the problem, thanks to Christopher Jones' suggestion.
ORA-19011 is the error Oracle replies with when the amount of data it would send back as a string exceeds the allowed buffer; it therefore needs to be sent back as a character LOB object instead.
Django does not have direct support for that Oracle-specific method (at least I did not find one), so the answer to the problem was a raw Django query:
query = 'select a.RESULT_DATA_ID, a.RESULT_XML.getClobVal() as RESULT_XML FROM SCHEMA_NAME.RESULTS_DATA a WHERE a.RESULT_DATA_ID=%s'
data = ResultsData.objects.raw(query, [id])
This way, you get back a RawQuerySet, which is the less known, less liked cousin of Django's QuerySet. You can iterate through the result, and RESULT_XML will contain a LOB field, which converts to a string when read.
Handling string-encoded XML data is awkward, so I also employed the xmltodict Python package to get it into a bit more civilized shape.
Next, I should probably look for a way to modify Django's getter for the RESULT_XML field only, and have it generate a query to Oracle DB with .getClobVal() method in it, but I will touch on that in a different StackOverflow question: Django - custom getter for 1 field in model

Confused about encoding issue when read from mysql via python code

There is one row in Mysql table as following:
1000, Intel® Rapid Storage Technology
The table's charset='utf8' when was created.
When I used Python code to read it, it became the following:
Intel® Management Engine Firmware
My python code as following:
db = MySQLdb.connect(db,user,passwd,dbName,port,charset='utf8')
The weird thing was that when I removed the charset='utf8', as follows:
db = MySQLdb.connect(db,user,passwd,dbName,port)
the result became correct.
Why do I get the wrong result when I indicate charset='utf8' in my code?
Have you tried leaving the charset out of the connect call and then setting it afterwards?
db = MySQLdb.connect(db,user,passwd,dbName,port)
db.set_character_set('utf8')
When trying to use utf8/utf8mb4, if you see Mojibake, check the following. This discussion also applies to double encoding, which is not necessarily visible.
- The bytes to be stored need to be utf8-encoded.
- The connection used when INSERTing and SELECTing text needs to specify utf8 or utf8mb4.
- The column needs to be declared CHARACTER SET utf8 (or utf8mb4).
- HTML should start with <meta charset=UTF-8>.
See also Python notes

Django 1.9 JSONfield stored dictionary returns unicode instead

We've just upgraded to Django 1.9 and moved things to its built-in JSONField, which we use to store a dictionary. However, when I try to read data from it, it returns a unicode string representation of the dictionary instead.
My JSONfield is defined as:
class SmsInfo(models.Model):
    [...]
    json = JSONField(default=dict)
Data is written to it by:
params = dict(request.POST)
SmsInfo.objects.create([...], json=params, [...])
It is later read in this way:
incoming_smsses = SmsInfo.objects.select_related('game').defer('game__serialized').filter([...])
At which point:
print incoming_smsses[0].json.__class__
returns
<type 'unicode'>
instead of the dict I am expecting, and my code crashes because it can't look up any keys.
I've been stuck on this for quite a bit, and I can't figure out why it is going wrong. I've used ast.literal_eval as a workaround for now, which turns the unicode back into a dict. That works, but I'd rather tackle this at the source!
Why is my dictionary being turned to unicode here?
I just went through upgrading from a third-party JSONField to the native postgres JSONField and found through psql that the column type is still text.
So on psql, confirm your column type:
\d+ table_name
And alter the column if it's still text:
ALTER TABLE table_name ALTER COLUMN column_name TYPE jsonb USING column_name::jsonb;
As suggested by erickw in comments, this has been filed as a bug: https://code.djangoproject.com/ticket/27675
If you happened to use django-jsonfield before, there is a conflict between them; thus, as the bug above suggests, the solution is to completely delete and remake the migration files of the app that uses the JSONField.
In that case, you would apparently want to uninstall django-jsonfield as well.
I'm using Django 1.11 and postgres 11.4.
Passing list of python dicts to JSONField worked for me:
data_python = []
for i in range(3):
    entry = {
        'field1': value1,
        'field2': 999,
        'field3': 'aaaaaaaaaa',
        'field4': 'never'
    }
    data_python.append(entry)

MyModel.objects.create(data=data_python, name='DbEntry1')
My guess is that for dicts this should work as well
And my model is:
class MetersWithNoReadings(models.Model):
    created_datetime = models.DateTimeField(auto_now_add=True)
    updated_datetime = models.DateTimeField(auto_now=True)
    name = models.CharField(max_length=25)
    data = JSONField()
As @stelios points out, this is a bug with django-jsonfield, which is not compatible with the native version.
Either:
- uninstall django-jsonfield (if it is no longer required as a project dependency; you can check with pipdeptree), or
- upgrade to django-jsonfield >= 1.3.0, since the issue is now closed and the fix merged.
Seems related to DB storage here. Still, this JSONField acts as a validator for proper JSON formatting.
However, you can hack around it and load a dict from the returned unicode string.
Try as follows:
import json
data = json.loads(incoming_smsses[0].json)
Then you can access it as a dict IMO.
You need to use the native postgres JSONField:
from django.contrib.postgres import fields

class Some_class(models.Model):
    json = fields.JSONField()

Django MongoDB Engine InvalidID

When using the python shell with the following:
>>> User.objects.get(pk=1)
I get the following error:
InvalidId: AutoField (default primary key) values must be strings
representing an ObjectId on MongoDB (got u'1' instead)
A possible solution to this problem, which did not work for me, may be found here: http://django-mongodb.org/troubleshooting.html
I'm wondering if anybody else has come across this problem and how were you able to fix it?
MongoDB doesn't use integer primary keys: its default _id is an ObjectId, so the primary key you pass to get() must be a string representing an ObjectId, not an integer like 1.
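For illustration, an ObjectId renders as 24 hexadecimal characters (12 bytes), which is why u'1' is rejected. The helper below is a hypothetical format check, not part of any MongoDB driver:

```python
def looks_like_object_id(value):
    # An ObjectId string is 24 hex characters (12 bytes)
    if len(value) != 24:
        return False
    try:
        int(value, 16)
        return True
    except ValueError:
        return False

print(looks_like_object_id("1"))                         # False
print(looks_like_object_id("4f2c0f72eb03a25b1a000001"))  # True
```

In practice you would pass the full hex string, e.g. User.objects.get(pk='4f2c0f72eb03a25b1a000001'), rather than an integer.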

Read uniqueidentifier field from MSSQL using python

I used pyodbc to access my MSSQL database.
When reading a uniqueidentifier field from MSSQL on macOS, I can print the correct value of the uuid field (e.g. 4C444660-6003-13CE-CBD5-8478B3C9C984); however, when I run the same code on Linux CentOS, I just see a very strange string like "???E??6??????c", and the type of the value is "buffer", not "str" as on macOS.
Could you explain why this is, and how I can get the correct value of the uuid on Linux? Thanks
On Linux I use str(uuid.UUID(bytes_le=value)).upper() to get a string like 4C444660-6003-13CE-CBD5-8478B3C9C984 from the uniqueidentifier field.
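As a sketch of why bytes_le is the right constructor: SQL Server stores the first three GUID fields little-endian, which is exactly the layout uuid.UUID(bytes_le=...) expects. The byte literal below is the raw form of the GUID from the question:

```python
import uuid

# Raw 16 bytes as the driver returns them on Linux (mixed-endian GUID layout)
raw = b'\x60\x46\x44\x4c\x03\x60\xce\x13\xcb\xd5\x84\x78\xb3\xc9\xc9\x84'

guid = str(uuid.UUID(bytes_le=raw)).upper()
print(guid)  # 4C444660-6003-13CE-CBD5-8478B3C9C984
```

Note how the first four bytes (60 46 44 4C) are the reverse of the GUID's leading group 4C444660, while the last eight bytes appear in order.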
This is a few years old, but I've had to tackle this same problem recently. My solution was to simply CAST the uniqueidentifier as a VARCHAR, which kept my Python code nice and tidy:
SELECT CAST(unique_id_column AS VARCHAR(36)) AS my_id FROM...
Then in Python, simply output row.my_id.
