can't use pony orm on sqlite3 blob fields - python

Just trying some basic exercises with Pony ORM (and Python 3.5, sqlite3).
To start with, I just want to print the results of a select query on some data I have, without any further processing. Pony ORM does not seem to like that at all.
The sqlite db dump:
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE sums (t text, path BLOB, name BLOB, sum text, primary key (path,name));
INSERT INTO "sums" VALUES('directory','','','');
INSERT INTO "sums" VALUES('file','','sums-backup-f.db','6859b35f9f026317c5df48932f9f2a91');
INSERT INTO "sums" VALUES('file','','md5-tree.py','c7af81d4aad9d00e88db7af950c264c2');
INSERT INTO "sums" VALUES('file','','test.db','a403e9b46e54d6ece851881a895b1953');
INSERT INTO "sums" VALUES('file','','sirius-alexa.db','22a20434cec550a83c675acd849002fa');
INSERT INTO "sums" VALUES('file','','sums-reseau-y.db','1021614f692b5d7bdeef2a45b6b1af5b');
INSERT INTO "sums" VALUES('file','','.md5-tree.py.swp','1c3c195b679e99ef18b3d46044f6e6c5');
INSERT INTO "sums" VALUES('file','','compare-md5.py','cfb4a5b3c7c4e62346aa5e1affef210a');
INSERT INTO "sums" VALUES('file','','charles.local.db','9c50689e8185e5a79fd9077c14636405');
COMMIT;
Here is the code I try to run in the Python 3.5 interactive shell:
from pony.orm import *

db = Database()

class File(db.Entity):
    _table_ = 'sums'
    t = Required(str)
    path = Required(bytes)
    name = Required(bytes)
    sum = Required(str)
    PrimaryKey(path, name)

db.bind('sqlite', '/some/edited/path/test.db')
db.generate_mapping()
File.select().show()
And it fails like this:
Traceback (most recent call last):
File "/usr/lib/python3.5/site-packages/pony/orm/core.py", line 5149, in _fetch
try: result = cache.query_results[query_key]
KeyError: (('f', 0, ()), (<pony.orm.ormtypes.SetType object at 0x7fd2d2701708>,), False, None, None, None, False, False, False, ())
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 2, in show
File "/usr/lib/python3.5/site-packages/pony/utils/utils.py", line 75, in cut_traceback
raise exc # Set "pony.options.CUT_TRACEBACK = False" to see full traceback
File "/usr/lib/python3.5/site-packages/pony/utils/utils.py", line 60, in cut_traceback
try: return func(*args, **kwargs)
File "/usr/lib/python3.5/site-packages/pony/orm/core.py", line 5256, in show
query._fetch().show(width)
File "/usr/lib/python3.5/site-packages/pony/orm/core.py", line 5155, in _fetch
used_attrs=translator.get_used_attrs())
File "/usr/lib/python3.5/site-packages/pony/orm/core.py", line 3859, in _fetch_objects
real_entity_subclass, pkval, avdict = entity._parse_row_(row, attr_offsets)
File "/usr/lib/python3.5/site-packages/pony/orm/core.py", line 3889, in _parse_row_
avdict[attr] = attr.parse_value(row, offsets)
File "/usr/lib/python3.5/site-packages/pony/orm/core.py", line 1922, in parse_value
val = attr.validate(row[offset], None, attr.entity, from_db=True)
File "/usr/lib/python3.5/site-packages/pony/orm/core.py", line 2218, in validate
val = Attribute.validate(attr, val, obj, entity, from_db)
File "/usr/lib/python3.5/site-packages/pony/orm/core.py", line 1894, in validate
if from_db: return converter.sql2py(val)
File "/usr/lib/python3.5/site-packages/pony/orm/dbapiprovider.py", line 619, in sql2py
if not isinstance(val, buffer): val = buffer(val)
TypeError: string argument without an encoding
Am I using this wrong, or is this a bug? I don't mind filing a bug report, but this is the first time I'm using this ORM, so I thought it might be better to check first.

SQLite has a (mis)feature which allows a column to store an arbitrary value regardless of the column type. Instead of a rigid data type, each SQLite column has an affinity, while each value has a storage class, which can vary within the same column. For example, you can store a text value inside an integer column, and vice versa. See Datatypes In SQLite Version 3 for more information.
The reason for the error is that the table contains values of the "wrong" type in its BLOB columns. A correct SQLite binary literal looks like x'abcdef'. The INSERT commands that you use insert UTF-8 strings instead.
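For comparison, here is how rows could be inserted so that the BLOB columns actually receive binary values: passing Python bytes as query parameters through the sqlite3 module stores them with the BLOB storage class (a small sketch; the file name and checksum here are made up):
import sqlite3

conn = sqlite3.connect('/some/edited/path/test.db')
# bytes parameters are stored with the BLOB storage class,
# equivalent to the x'...' literal syntax
conn.execute("INSERT INTO sums VALUES (?, ?, ?, ?)",
             ('file', b'', b'example.db', 'd41d8cd98f00b204e9800998ecf8427e'))
conn.commit()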
This problem was somewhat fixed in the latest version of Pony, which you can take from GitHub. Now, if Pony receives a string value from a BLOB column, it just keeps that value without throwing an exception.
If you populate the table with Pony, it will write BLOB data as correct binary values, so it can read them back later without any problem.
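A minimal sketch of what that could look like, reusing the db and File entity defined in the question (the values themselves are made up):
from pony.orm import db_session

with db_session:
    File(t='file',
         path=b'backups',                 # stored as a real BLOB value
         name=b'sums-backup-f.db',
         sum='6859b35f9f026317c5df48932f9f2a91')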

Related

Delete and recreate table in Apache Ignite

I am having a problem when I want to DROP a table and recreate it in Apache Ignite.
I am using a combination of the REST API and pyignite to perform the operations.
Ignite says the table does not exist, yet it will not let me recreate it, saying that it already exists:
>>> DROP_QUERY_ALERT="DROP TABLE alerts"
>>> client.sql(DROP_QUERY_ALERT)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.7/site-packages/pyignite/client.py", line 404, in sql
raise SQLError(result.message)
pyignite.exceptions.SQLError: Table doesn't exist: ALERTS
>>> CREATE_ALERT_QUERY = '''CREATE TABLE storage.alerts (
... id VARCHAR PRIMARY KEY,
... name VARCHAR,
... address_field VARCHAR,
... create_on TIMESTAMP,
... integration VARCHAR,
... alert VARCHAR
... ) WITH "CACHE_NAME=storage"'''
>>> client.sql(CREATE_ALERT_QUERY)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.7/site-packages/pyignite/client.py", line 404, in sql
raise SQLError(result.message)
pyignite.exceptions.SQLError: Table already exists: ALERTS
>>>
If I try to make a query, it also fails:
>>> N_ALERT_QUERY = '''SELECT * FROM alerts'''
>>> result = client.sql(N_ALERT_QUERY, include_field_names=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.7/site-packages/pyignite/client.py", line 404, in sql
raise SQLError(result.message)
pyignite.exceptions.SQLError: Failed to parse query. Table "ALERTS" not found; SQL statement:
SELECT * FROM alerts [42102-197]
>>>
I am lost since this seemed to work before, but now I am unable to continue.
Is this a bug or a known behavior? Am I missing something?
Thank you.
It may be a known behavior:
Note, however, that the cache we create can not be dropped with DDL
command. … It should be deleted as any other key-value cache.
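For what it's worth, the backing cache itself can be removed through the key-value API rather than DDL (a sketch; 'storage' is the cache name used in the CREATE TABLE above, and I am assuming the pyignite thin-client cache API here):
cache = client.get_cache('storage')
cache.destroy()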
After some searching and experimenting, I finally found that the table did indeed exist by executing the following query:
SHOW_TABLES_QUERY="SELECT * FROM INFORMATION_SCHEMA.TABLES"
It turns out that Ignite does not drop a table if it still contains at least one record, as was the case here (http://apache-ignite-users.70518.x6.nabble.com/Table-not-getting-dropped-td27957.html).
I deleted the records, and dropped the table.
It took some minutes, but then I was able to recreate the table.
Some of the confusion in my case was related to the fact that TABLE_NAME should have been replaced with <cachename>.TABLE_NAME when performing the drop query:
DROP_QUERY_ALERT="DROP TABLE storage.alerts"

psql cast parse error during cursor.fetchall()

I have Python code which queries PostgreSQL and returns a batch of results using cursor.fetchall().
It throws an exception and fails the whole process if a cast fails due to bad data in the DB.
I get this exception:
File "/usr/local/lib/python2.7/site-packages/psycopg2cffi/_impl/cursor.py", line 377, in fetchall
return [self._build_row() for _ in xrange(size)]
File "/usr/local/lib/python2.7/site-packages/psycopg2cffi/_impl/cursor.py", line 891, in _build_row
self._casts[i], val, length, self)
File "/usr/local/lib/python2.7/site-packages/psycopg2cffi/_impl/typecasts.py", line 71, in typecast
return caster.cast(value, cursor, length)
File "/usr/local/lib/python2.7/site-packages/psycopg2cffi/_impl/typecasts.py", line 39, in cast
return self.caster(value, length, cursor)
File "/usr/local/lib/python2.7/site-packages/psycopg2cffi/_impl/typecasts.py", line 311, in parse_date
raise DataError("bad datetime: '%s'" % bytes_to_ascii(value))
DataError: bad datetime: '32014-03-03'
Is there a way to tell the caster to ignore this error and parse this as a string instead of failing the entire batch?
You can "hack" the parser of psycopg2cffi to return DATE objects as strings instead:
If you look in the code you can see the registration of the DATE parser, so you can replace the serializer of DATE in your code.
import psycopg2cffi

psycopg2cffi._impl.typecasts._default_type(
    'DATE', [1082], psycopg2cffi._impl.typecasts.parse_string)
Of course, the same can be done for every type.
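If you prefer not to reach into the private _impl modules, the same effect should be possible through the public extensions API, assuming psycopg2cffi mirrors psycopg2's extensions interface here (the typecaster name DATE_AS_STRING is arbitrary):
from psycopg2cffi import extensions

# OID 1082 is PostgreSQL's DATE type; return the raw value unchanged
DATE_AS_STRING = extensions.new_type((1082,), 'DATE_AS_STRING',
                                     lambda value, cursor: value)
extensions.register_type(DATE_AS_STRING)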
Alternatively, change your SQL query to cast and return the date column as a string,
e.g. select date_column_name::text from table_name (or use to_char(date_column_name, 'YYYY-MM-DD')).
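For instance, with a cursor already open (my_table and date_col are hypothetical names):
cur.execute("SELECT date_col::text FROM my_table")
rows = cur.fetchall()   # the dates now arrive as plain strings, so no cast error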

select a single column from Mysql DB using sqlalchemy

How do I get the values of a single column using SQLAlchemy?
In MySQL
select id from request r where r.product_id = 1;
In Python
>>> request = meta.tables['request']
>>> request.select(request.c.product_id==1).execute().rowcount
27L
>>> request.select([request.c.id]).where(request.c.product_id==1).execute()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "build/bdist.freebsd-6.3-RELEASE-i386/egg/sqlalchemy/sql/expression.py", line 2616, in select
File "build/bdist.freebsd-6.3-RELEASE-i386/egg/sqlalchemy/sql/expression.py", line 305, in select
File "build/bdist.freebsd-6.3-RELEASE-i386/egg/sqlalchemy/sql/expression.py", line 5196, in __init__
File "build/bdist.freebsd-6.3-RELEASE-i386/egg/sqlalchemy/sql/expression.py", line 1517, in _literal_as_text
sqlalchemy.exc.ArgumentError: SQL expression object or string expected.
I found the answer: I have to use the general select() construct instead of the table's select() method.
Leaving this here in case more folks find it useful.
from sqlalchemy import select

conn = engine.connect()
stmt = select([request.c.id]).where(request.c.product_id==1)
conn.execute(stmt).rowcount
27L
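If you want the actual id values rather than just the row count, the same statement can be iterated (a sketch using the same legacy select([...]) calling style as above):
from sqlalchemy import select

stmt = select([request.c.id]).where(request.c.product_id == 1)
ids = [row[0] for row in conn.execute(stmt)]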

SQLAlchemy returns an integer

I am accessing a database using SQLAlchemy. When I try to filter the table using a bunch of public and private keys, I get an AttributeError saying 'int' object has no attribute 'date'.
Sometimes I am able to filter the results once, and when the filter is called again it crashes with the same error. Is this a problem with SQLAlchemy or with PyDev?
Below is the snippet of my filter.
randomize_query(session('test').query(tableName).filter(tableName.field1 == criteria, tableName.field2 == 2).order_by(desc(tableName.field3))).first()
The full traceback is as below
File "C:\Python27\lib\site-packages\sqlalchemy\orm\query.py", line 2145, in first
ret = list(self[0:1])
File "C:\Python27\lib\site-packages\sqlalchemy\orm\query.py", line 2012, in __getitem__
return list(res)
File "C:\Python27\lib\site-packages\sqlalchemy\orm\loading.py", line 72, in instances
rows = [process[0](row, None) for row in fetch]
File "C:\Python27\lib\site-packages\sqlalchemy\orm\loading.py", line 447, in _instance
populate_state(state, dict_, row, isnew, only_load_props)
File "C:\Python27\lib\site-packages\sqlalchemy\orm\loading.py", line 301, in populate_state
populator(state, dict_, row)
File "C:\Python27\lib\site-packages\sqlalchemy\orm\strategies.py", line 150, in fetch_col
dict_[key] = row[col]
File "C:\Python27\lib\site-packages\sqlalchemy\engine\result.py", line 89, in __getitem__
return processor(self._row[index])
File "C:\Python27\lib\site-packages\sqlalchemy\dialects\oracle\cx_oracle.py", line 250, in process
return value.date()
AttributeError: 'int' object has no attribute 'date'
The exception is thrown when the result set is loaded and SQLAlchemy wants to populate the result objects. One column is qualified as a Date type, but the Oracle result set is giving you an integer instead.
The cx_Oracle library will normally convert an Oracle-supplied native DATE column value into a Python datetime.datetime object. However, this is not happening for all of your rows here.
You'll need to narrow down which row or rows have a column that is not being translated to a datetime object. Find a pattern in the filters that includes or excludes these rows, and narrow it down so you can inspect the database rows by hand in a different client.
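One way to narrow it down is to bypass the ORM and look at the raw values directly (a sketch; my_table, id_col and date_col are hypothetical names to be replaced with the real ones):
conn = engine.connect()
for id_val, raw in conn.execute("SELECT id_col, date_col FROM my_table"):
    if not hasattr(raw, 'date'):   # datetime objects have .date(); offending rows won't
        print("%s %r" % (id_val, raw))
conn.close()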

Web2py Query Legacy Database

I have a legacy database called my_legacy_db which is separate from the normal db.
my_legacy_db
users
- email
- username
- name
So Cliff, your first part would work to generate the field names and put everything in a dict to build the queries. The problem is when I do this query:
db().select(my_legacy_db.users)
I get this error:
In [20] : db().select(my_legacy_db.users)
Traceback (most recent call last):
File "/opt/web-apps/web2py/gluon/contrib/shell.py", line 233, in run
exec compiled in statement_module.__dict__
File "<string>", line 1, in <module>
File "/opt/web-apps/web2py/gluon/dal.py", line 7578, in select
return adapter.select(self.query,fields,attributes)
File "/opt/web-apps/web2py/gluon/dal.py", line 1307, in select
sql = self._select(query, fields, attributes)
File "/opt/web-apps/web2py/gluon/dal.py", line 1196, in _select
raise SyntaxError, 'Set: no tables selected'
SyntaxError: Set: no tables selected
In [21] : print (flickr_db.users)
users
In [22] : print flickr_db
<DAL {'_migrate_enabled': True, '_lastsql': "SET sql_mode='NO_BACKSLASH_ESCAPES';", '_db_codec': 'UTF-8', '_timings': [('SET FOREIGN_KEY_CHECKS=1;', 0.0002460479736328125), ("SET sql_mode='NO_BACKSLASH_ESCAPES';", 0.00025606155395507812)], '_fake_migrate': False, '_dbname': 'mysql', '_request_tenant': 'request_tenant', '_adapter': <gluon.dal.MySQLAdapter object at 0x91375ac>, '_tables': ['users'], '_pending_references': {}, '_fake_migrate_all': False, 'check_reserved': None, '_uri': 'mysql://CENSORED', 'users': <Table 'username': <gluon.dal.Field object at 0x9137b6c>, '_db': <DAL {...}>, 'cycled': <gluon.dal.Field object at 0x94d0b8c>, 'id': <gluon.dal.Field object at 0x95054ac>, 'ALL': <gluon.dal.SQLALL object at 0x969a7ac>, '_sequence_name': 'users_sequence', 'name': <gluon.dal.Field object at 0x9137ecc>, '_referenced_by': [], '_singular': 'Users', '_common_filter': None, '_id': <gluon.dal.Field object at 0x95054ac>}>, '_referee_name': '%(table)s', '_migrate': True, '_pool_size': 0, '_common_fields': [], '_uri_hash': 'dfb3272fc537e3339819a1549180722e'}>
Am I doing something wrong here? Is the legacy db not being built in /databases correctly? Thanks in advance for any help.
UPDATE: I tried what Anthony suggested in the model shell:
In [3] : db(my_legacy_db.users).select()
Traceback (most recent call last):
File "/opt/web-apps/web2py/gluon/contrib/shell.py", line 233, in run
exec compiled in statement_module.__dict__
File "<string>", line 1, in <module>
File "/opt/web-apps/web2py/gluon/dal.py", line 7577, in select
fields = adapter.expand_all(fields, adapter.tables(self.query))
File "/opt/web-apps/web2py/gluon/dal.py", line 1172, in expand_all
for field in self.db[table]:
File "/opt/web-apps/web2py/gluon/dal.py", line 6337, in __getitem__
return dict.__getitem__(self, str(key))
KeyError: 'users'
Now I know that users is defined in my_legacy_db, and all the syntax is correct. Is this error occurring because the db files aren't being generated correctly, or am I still doing something wrong with the select syntax?
If "users" is the name of a table and you want to select all records and all fields, you would do:
db(my_legacy_db.users).select()
The query goes inside db(), not inside select() (select() is where you list the fields you want returned, or leave it empty if you want all fields). Note, in the above line, my_legacy_db.users is not actually a query but just a table -- that's a shortcut to tell web2py you want all records in the table.
You could also do:
db().select(my_legacy_db.users.ALL)
That indicates you want all fields, and by excluding the query, it assumes you want all records in the table.
See the book for more details.
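For reference, here is a minimal sketch of how a legacy table is typically exposed to the DAL so that my_legacy_db.users is defined before the select (the connection URI and field types here are assumptions):
# in a model file; migrate=False because the table already exists in the legacy db
my_legacy_db = DAL('mysql://user:pass@host/legacy', migrate_enabled=False)
my_legacy_db.define_table('users',
    Field('email'),
    Field('username'),
    Field('name'),
    migrate=False)

rows = my_legacy_db(my_legacy_db.users).select()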
