from django.db import models
from django.contrib.auth.models import User as DjangoUser

class Ward(models.Model):
    user = models.ForeignKey(DjangoUser, related_name='wards')
    group = models.ForeignKey(Group, related_name='wards')
This is my Django model, and I use this filter:

Group.objects.filter(wards__user=_user).all()

This code works fine with SQLite, but it fails on PostgreSQL with:

operator does not exist: character varying = integer
LINE 1: ...rchive_ward"."group_id" ) WHERE "archive_ward"."user_id" = 1

I think it is caused by the user_id field in the archive_ward table: its data type is character varying(20). What can I do to fix this?
Try dropping the user table from the database and creating it again from scratch; syncing the database again should recreate the column with the right type.

Alternatively, you can fall back to a raw query.
You cannot compare an integer with a varchar. PostgreSQL is strict and does not do any magic typecasting for you. I'm guessing SQLite does the typecasting automagically (which is a bad thing).
If you want to compare these two different beasts, you will have to cast one to the other using the casting syntax ::
The Postgres error means you're comparing an integer to a string:
operator does not exist: character varying = integer
You could change the database model so user_id is of an integer type. Or you could cast the value to a string on the Python side (note that str(_user) gives the username, so cast the primary key instead):

Group.objects.filter(wards__user=str(_user.pk)).all()
I have the following model for an Oracle database, which is not a part of my Django project:
class ResultsData(models.Model):
    RESULT_DATA_ID = models.IntegerField(primary_key=True, db_column="RESULT_DATA_ID")
    RESULT_XML = models.TextField(blank=True, null=True, db_column="RESULT_XML")

    class Meta:
        managed = False
        db_table = '"schema_name"."results_data"'
The RESULT_XML column in the database itself is declared as Oracle's XMLType. I chose to represent it as a TextField in the Django model, since that imposes no character limit.
When I try to download some data with that model, I get the following error:

DatabaseError: ORA-19011: Character string buffer too small

I figure it is because of the volume of data stored in the RESULT_XML field, since pulling a record with just .values("RESULT_DATA_ID") works fine.

Any ideas on how I can work around this problem? Googling for answers has not yielded anything so far.
UPDATED ANSWER
I have found a much better way of dealing with this issue: I wrote a custom field Transform, which generates the Oracle SQL query I was after:
OracleTransforms.py
from django.db.models import TextField
from django.db.models.lookups import Transform

class CLOBVAL(Transform):
    '''
    Oracle-specific transform for XMLType fields, which returns string data exceeding
    the buffer size (ORA-19011: Character string buffer too small) as a character LOB.
    '''
    function = None
    lookup_name = 'clobval'

    def as_oracle(self, compiler, connection, **extra_context):
        return super().as_sql(
            compiler, connection,
            template='(%(expressions)s).GETCLOBVAL()',
            **extra_context
        )

# Needed for CLOBVAL to work as a .values('field_name__clobval') lookup in Django ORM queries
TextField.register_lookup(CLOBVAL)
With the above, I can now just write a query as follows:
from .OracleTransforms import CLOBVAL
ResultsData.objects.filter(RESULT_DATA_ID=some_id).values('RESULT_DATA_ID', 'RESULT_XML__clobval')
or
ResultsData.objects.filter(RESULT_DATA_ID=some_id).values('RESULT_DATA_ID', XML=CLOBVAL('RESULT_XML'))
This is the best solution for me, as I do get to keep using QuerySet, instead of RawQuerySet.
The only limitation I see with this solution for now is that I always need to request the field through the transform (e.g. .values('RESULT_XML__clobval')) in my ORM queries, or Oracle will report ORA-19011 again - but I guess this is still a good outcome.
OLD ANSWER
So, I have found a way around the problem, thanks to Christopher Jones's suggestion.

ORA-19011 is the error Oracle replies with when the amount of data it would send back as a string exceeds the allowed buffer; the data needs to be sent back as a character LOB object instead.

Django has no direct support for that Oracle-specific method (at least I did not find one), so the answer to the problem was a raw Django query:
query = 'select a.RESULT_DATA_ID, a.RESULT_XML.getClobVal() as RESULT_XML FROM SCHEMA_NAME.RESULTS_DATA a WHERE a.RESULT_DATA_ID=%s'
data = ResultsData.objects.raw(query, [id])
This way, you get back a RawQuerySet, which is the lesser-known, less-liked cousin of Django's QuerySet. You can iterate through the result, and RESULT_XML will contain a LOB field which, when read, converts to a string.

Handling string-encoded XML data is awkward, so I also employed the xmltodict Python package to get it into a bit more civilized shape.
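As a minimal sketch of that last step (the XML payload below is made up for illustration), xmltodict turns the string into nested dictionaries:

```python
import xmltodict  # third-party package: pip install xmltodict

# RESULT_XML arrives as one big string; parse it into nested dicts.
xml_string = '<result><id>42</id><status>ok</status></result>'
doc = xmltodict.parse(xml_string)

print(doc['result']['status'])  # → ok
```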
Next, I should probably look for a way to modify Django's getter for the RESULT_XML field only, and have it generate a query to Oracle DB with .getClobVal() method in it, but I will touch on that in a different StackOverflow question: Django - custom getter for 1 field in model
I need to generate a unique account ID for each user (numeric only).

A UUID can't solve this problem - please help!
Here you go
import random
import string
''.join(random.choice(string.digits) for _ in range(8))
Even shorter with Python 3.6+, using random.choices():
import random
import string
''.join(random.choices(string.digits, k=8))
Avoid possible collisions:

Try creating a new object with the generated id; on IntegrityError, generate a new id and try again, e.g.:
def create_obj():
    id = ''.join(random.choices(string.digits, k=8))
    try:
        MyModel.objects.create(id=id)
    except IntegrityError:
        create_obj()
OR
def create_unique_id():
    return ''.join(random.choices(string.digits, k=8))

def create_object():
    id = create_unique_id()
    # Note: .get() raises DoesNotExist on a miss instead of returning None,
    # so use .filter(...).exists() to test whether an id is already taken.
    while MyModel.objects.filter(pk=id).exists():
        id = create_unique_id()
    MyModel.objects.create(id=id)
Thanks to @WillemVanOnsem for pointing out the chance of generating duplicate ids. The two examples above will regenerate the id as many times as needed to get a unique one, but as the number of rows in your database grows, finding a free id takes longer and longer; once there are about 10^8 records, every possible 8-digit combination already exists, creating a new record becomes impossible, and you will be stuck in an infinite loop.

If the stats provided by Willem are correct, I'd say the chances of a collision are too high. So I would recommend not creating ids yourself; go with Django's default auto field or a UUID, which guarantees uniqueness through space and time.
Assuming you are using MySQL (your comment said you avoided the database PK because it starts at 1, 2, ...), why not just make the PK start in your desired range?
eg.
ALTER TABLE user AUTO_INCREMENT = 10000000;
You can put this into a custom migration; see manage.py makemigrations --empty.

I presume other databases have similar mechanisms as well.
For a Django app I would suggest using get_random_string from the django.utils.crypto package. Internally it uses Python's secrets.choice, so you will not have to change your code if the secrets interface changes.
from django.utils.crypto import get_random_string

def generate_account_id():
    return get_random_string(8, allowed_chars='0123456789')
django.utils.crypto
If you work with Python >= 3.6, an alternative is secrets.token_hex(nbytes), which returns a string of hex digits; you can then convert that string to a number. To detect collisions you can additionally check whether an instance with the generated ID already exists in your Django model (as shown in the accepted answer).

Code example:
import secrets
hexstr = secrets.token_hex(4)
your_id = int(hexstr, 16)
I'm building a platform with a PostgreSQL database for the first time, but I have a few years of experience with Oracle and MySQL databases.

My question is about the UUID data type in Postgres.

I am using a UUIDv4 to identify a record in multiple tables, so a request to /users/2df2ab0c-bf4c-4eb5-9119-c37aa6c6b172 will respond with the user that has that UUID. I also have an auto-increment ID field for indexing.

My query is just a select with a where clause on the UUID. But when the user enters an invalid UUID like 2df2ab0c-bf4c-4eb5-9119-c37aa6c6b17 (without the last 2), the database responds with this error: invalid input syntax for uuid.

I was wondering why it returns this, because when you select on an integer-typed column with a string value it does work.

Now I need to add a middleware/check on each route that has a UUID-type parameter, because otherwise the server would crash.

Btw, I'm using Flask 0.12 (Python) and PostgreSQL 9.6.
UUID as defined by RFC 4122, ISO/IEC 9834-8:2005... is a 128-bit quantity ... written as a sequence of lower-case hexadecimal digits... for a total of 32 digits representing the 128 bits. (Postgresql Docs)
There is no conversion from a 31-hex-digit string to a 128-bit UUID (sorry). You have some options:

Convert the column to ::text in your query (not really recommended, because you'd be converting every row, every time):
SELECT * FROM my_table WHERE my_uuid::TEXT = 'invalid uid';
Don't store it as a UUID type. If you don't want or need UUID semantics, store it as a varchar.

Check your customer input (my recommendation). Conceptually, this is no different from asking for someone's age and getting 'ABC' as the response.

Postgres allows upper/lower case and is flexible about the use of hyphens, so a pre-check is simple: strip the hyphens, lowercase, count the [0-9a-f] characters, and if there are 32, you have a workable UUID. Otherwise, rather than telling your user "not found", you can tell them "not a UUID", which is probably more user-friendly.
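A minimal sketch of such a pre-check in Python, using the standard library's uuid module (which already tolerates case and hyphen variations); the function name is made up:

```python
import uuid

def is_valid_uuid(value: str) -> bool:
    """Return True if value parses as a UUID; tolerant of case and hyphens."""
    try:
        uuid.UUID(value)
        return True
    except ValueError:
        return False

print(is_valid_uuid('2df2ab0c-bf4c-4eb5-9119-c37aa6c6b172'))  # → True
print(is_valid_uuid('2df2ab0c-bf4c-4eb5-9119-c37aa6c6b17'))   # → False
```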
The database is throwing an error because you're trying to match in a UUID-type column with a query that doesn't contain a valid UUID. This doesn't happen with integer or string queries because leaving off the last character of those does result in a valid integer or string, just not the one you probably intended.
You can either prevent passing invalid UUIDs to the database by validating your input (which you should be doing anyway for other reasons) or somehow trap on this error. Either way, you'll need to present a human-readable error message back to the user.
Also consider whether users should be typing in URLs with UUIDs in the first place, which isn't very user-friendly; if they're just clicking links rather than typing them, as usually happens, then how did that error even happen? There's a good chance that it's an attack of some sort, and you should respond accordingly.
I've tried two approaches and neither works; I searched on Google and didn't find a proper solution. My code looks like:
intField = Column(SmallInt(), length=5)
And the error says:
Unknown arguments passed to Column: ['length']
I also tried, knowing it shouldn't work, this solution:
intField = Column(SmallInt(5))
And it does not work because this SQLAlchemy datatype doesn't accept arguments.
Any ideas?
[Update]
I'm using MySQL as the database engine, so the solution here is to import MySQL's own INTEGER type and then specify the display width I want.
In the above example, I would only need to do:
from sqlalchemy.dialects import mysql

Integer = mysql.INTEGER

class ...
    ...
    intField = Column(Integer(5))
But I still wonder if there is a more generic approach?
MySQL has the DECIMAL/NUMERIC type.

Use DECIMAL(5, 0) for a field with 5 digits.

Use this only if you really need a number. If you won't do math with this field, prefer a String(5) and validate the digits (isdigit() is your friend).

In SQLAlchemy, handle it as a Numeric field.
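A sketch of both options in SQLAlchemy; the model, table, and helper names are made up for illustration:

```python
from sqlalchemy import Column, Integer, Numeric, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Record(Base):  # hypothetical model
    __tablename__ = 'record'
    id = Column(Integer, primary_key=True)
    amount = Column(Numeric(5, 0))  # up to five digits, no decimal places
    code = Column(String(5))        # string of at most five characters

def is_valid_code(value: str) -> bool:
    """Application-side check: digits only, at most five of them."""
    return value.isdigit() and len(value) <= 5
```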
I am accessing a PostgreSQL database using SQLAlchemy models. In one of the models I have a column with the UUID type:

id = Column(UUID(as_uuid=True), default=uuid.uuid4, nullable=False, unique=True)  # pass the callable, not uuid.uuid4(), so each row gets a fresh id

and it works when I try to insert a new row (it generates a new id).
The problem is when I try to fetch a Person by id, like this:

person = session.query(Person).filter(Person.id.like(some_id)).first()

where some_id is a string received from the client. I then get the error (ProgrammingError): operator does not exist: uuid ~~ unknown.

How do I fetch/compare a UUID column in the database through SQLAlchemy?
Don't use like; use equality. In SQLAlchemy's filter() that is the Python == operator, which compiles to the SQL = operator (in ISO-standard SQL, = means equality).
Keep in mind that UUIDs are stored in PostgreSQL as binary types, not as text strings, so LIKE makes no sense. You could probably do uuid::text LIKE ? but it would perform very poorly over large sets, because you are effectively ensuring that indexes can't be used.
But = works, and is far preferable:
mydb=> SELECT 'd796d940-687f-11e3-bbb6-88ae1de492b9'::uuid = 'd796d940-687f-11e3-bbb6-88ae1de492b9';
 ?column?
----------
 t
(1 row)
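In SQLAlchemy terms, parse the client string into a uuid.UUID first (which also validates it), then compare with ==; the Person model and session are assumptions from the question:

```python
import uuid

# Parse (and thereby validate) the string coming from the client.
some_id = 'd796d940-687f-11e3-bbb6-88ae1de492b9'
parsed = uuid.UUID(some_id)

# With as_uuid=True on the column, equality just works:
# person = session.query(Person).filter(Person.id == parsed).first()

print(str(parsed) == some_id)  # → True
```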