SQLAlchemy returns an integer - python

I am accessing a database using SQLAlchemy. When I try to filter the table using a bunch of public and private keys, I get an AttributeError saying 'int' object has no attribute 'date'.
Sometimes I am able to filter the results once, and when the filter is called again it crashes with the same error. Is this a problem with SQLAlchemy or PyDev?
Below is a snippet of my filter.
randomize_query(
    session('test').query(tableName)
    .filter(tableName.field1 == criteria, tableName.field2 == 2)
    .order_by(desc(tableName.field3))
).first()
The full traceback is below:
File "C:\Python27\lib\site-packages\sqlalchemy\orm\query.py", line 2145, in first
ret = list(self[0:1])
File "C:\Python27\lib\site-packages\sqlalchemy\orm\query.py", line 2012, in __getitem__
return list(res)
File "C:\Python27\lib\site-packages\sqlalchemy\orm\loading.py", line 72, in instances
rows = [process[0](row, None) for row in fetch]
File "C:\Python27\lib\site-packages\sqlalchemy\orm\loading.py", line 447, in _instance
populate_state(state, dict_, row, isnew, only_load_props)
File "C:\Python27\lib\site-packages\sqlalchemy\orm\loading.py", line 301, in populate_state
populator(state, dict_, row)
File "C:\Python27\lib\site-packages\sqlalchemy\orm\strategies.py", line 150, in fetch_col
dict_[key] = row[col]
File "C:\Python27\lib\site-packages\sqlalchemy\engine\result.py", line 89, in __getitem__
return processor(self._row[index])
File "C:\Python27\lib\site-packages\sqlalchemy\dialects\oracle\cx_oracle.py", line 250, in process
return value.date()
AttributeError: 'int' object has no attribute 'date'

The exception is thrown when the result set is loaded and SQLAlchemy wants to populate the result objects. One column is qualified as a Date type, but the Oracle result set is giving you an integer instead.
The cx_Oracle library will normally convert an Oracle-supplied native DATE column value into a Python datetime.datetime object. However, this is not happening for all of your rows here.
You'll need to narrow down which row or rows have a column that is not being translated to a datetime object. Find a pattern in the filters that includes or excludes these rows, and narrow it down so you can inspect the offending database rows by hand in a different client.
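To narrow it down, one option is to bypass the ORM's Date processing with a textual query and inspect the raw values. A minimal sketch, where the table and column names are placeholders for the real ones behind tableName:
from sqlalchemy import text

# Hedged sketch: a textual query skips the ORM's Date result processor, so
# the raw values can be examined. Table/column names are placeholders.
result = session('test').execute(
    text("SELECT primary_key_col, date_col FROM some_table"))
for pk, raw in result:
    if not hasattr(raw, 'date'):  # healthy rows yield datetime objects
        print(pk, type(raw), repr(raw))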

TypeError: 'Query' object is not iterable in Odoo 10

Can anyone help me with this error? I can't find any mistake in this line of code: if soline.adv_issue_ids and not soline.issue_product_ids:
What I'm trying to do is the following:
In the IF condition, I'm trying to get the values of the many2many (adv_issue_ids) and one2many (issue_product_ids) fields from the sale.order.line object.
Details of the variables used in the code line:
soline is a sale order line recordset (e.g. sale.order.line(129))
adv_issue_ids is a many2many field in sale.order.line
issue_product_ids is a one2many field in sale.order.line
Please find the error log below:
File "/workspace/parts/my_module/wizard/sale_line.py", line 76, in function_name
if soline.adv_issue_ids and not soline.issue_product_ids:
File "/workspace/parts/odoo/odoo/fields.py", line 931, in __get__
self.determine_value(record)
File "/workspace/parts/odoo/odoo/fields.py", line 1035, in determine_value
record._prefetch_field(self)
File "/workspace/parts/odoo/odoo/models.py", line 3087, in _prefetch_field
result = records.read([f.name for f in fs], load='_classic_write')
File "/workspace/parts/odoo/odoo/models.py", line 3027, in read
self._read_from_database(stored, inherited)
File "/workspace/parts/odoo/odoo/models.py", line 3117, in _read_from_database
self._apply_ir_rules(query, 'read')
File "/workspace/parts/odoo/odoo/models.py", line 4131, in _apply_ir_rules
where_clause, where_params, tables = Rule.domain_get(self._name, mode)
TypeError: 'Query' object is not iterable
Thanks in advance!!!
A Query object is not iterable, so use the all() method to fetch all the values.
Try query.all()
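A minimal sketch of the suggested fix, assuming query is the non-iterable Query object and that it exposes an all() method as the answer describes:
# Hedged sketch: materialize the query into a plain list before iterating.
results = query.all()
for record in results:
    print(record)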

can't use pony orm on sqlite3 blob fields

Just trying some basic exercises with Pony ORM (and Python 3.5, sqlite3).
I just want to print a select query of some data I have, without further processing to start with. Pony ORM does not seem to like that at all...
The SQLite DB dump:
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE sums (t text, path BLOB, name BLOB, sum text, primary key (path,name));
INSERT INTO "sums" VALUES('directory','','','');
INSERT INTO "sums" VALUES('file','','sums-backup-f.db','6859b35f9f026317c5df48932f9f2a91');
INSERT INTO "sums" VALUES('file','','md5-tree.py','c7af81d4aad9d00e88db7af950c264c2');
INSERT INTO "sums" VALUES('file','','test.db','a403e9b46e54d6ece851881a895b1953');
INSERT INTO "sums" VALUES('file','','sirius-alexa.db','22a20434cec550a83c675acd849002fa');
INSERT INTO "sums" VALUES('file','','sums-reseau-y.db','1021614f692b5d7bdeef2a45b6b1af5b');
INSERT INTO "sums" VALUES('file','','.md5-tree.py.swp','1c3c195b679e99ef18b3d46044f6e6c5');
INSERT INTO "sums" VALUES('file','','compare-md5.py','cfb4a5b3c7c4e62346aa5e1affef210a');
INSERT INTO "sums" VALUES('file','','charles.local.db','9c50689e8185e5a79fd9077c14636405');
COMMIT;
Here is the code I try to run in the Python 3.5 interactive shell:
from pony.orm import *

db = Database()

class File(db.Entity):
    _table_ = 'sums'
    t = Required(str)
    path = Required(bytes)
    name = Required(bytes)
    sum = Required(str)
    PrimaryKey(path, name)

db.bind('sqlite', '/some/edited/path/test.db')
db.generate_mapping()
File.select().show()
And it fails like this:
Traceback (most recent call last):
File "/usr/lib/python3.5/site-packages/pony/orm/core.py", line 5149, in _fetch
try: result = cache.query_results[query_key]
KeyError: (('f', 0, ()), (<pony.orm.ormtypes.SetType object at 0x7fd2d2701708>,), False, None, None, None, False, False, False, ())
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 2, in show
File "/usr/lib/python3.5/site-packages/pony/utils/utils.py", line 75, in cut_traceback
raise exc # Set "pony.options.CUT_TRACEBACK = False" to see full traceback
File "/usr/lib/python3.5/site-packages/pony/utils/utils.py", line 60, in cut_traceback
try: return func(*args, **kwargs)
File "/usr/lib/python3.5/site-packages/pony/orm/core.py", line 5256, in show
query._fetch().show(width)
File "/usr/lib/python3.5/site-packages/pony/orm/core.py", line 5155, in _fetch
used_attrs=translator.get_used_attrs())
File "/usr/lib/python3.5/site-packages/pony/orm/core.py", line 3859, in _fetch_objects
real_entity_subclass, pkval, avdict = entity._parse_row_(row, attr_offsets)
File "/usr/lib/python3.5/site-packages/pony/orm/core.py", line 3889, in _parse_row_
avdict[attr] = attr.parse_value(row, offsets)
File "/usr/lib/python3.5/site-packages/pony/orm/core.py", line 1922, in parse_value
val = attr.validate(row[offset], None, attr.entity, from_db=True)
File "/usr/lib/python3.5/site-packages/pony/orm/core.py", line 2218, in validate
val = Attribute.validate(attr, val, obj, entity, from_db)
File "/usr/lib/python3.5/site-packages/pony/orm/core.py", line 1894, in validate
if from_db: return converter.sql2py(val)
File "/usr/lib/python3.5/site-packages/pony/orm/dbapiprovider.py", line 619, in sql2py
if not isinstance(val, buffer): val = buffer(val)
TypeError: string argument without an encoding
Am I using this wrong, or is this a bug? I don't mind filing a bug, but it's the first time I'm using this ORM, so I thought it might be better to check first...
SQLite has a (mis)feature which allows a column to store an arbitrary value, disregarding the column type. Instead of a rigid data type, each SQLite column has an affinity, while each value has a storage class which can differ within the same column. For example, you can store a text value inside an integer column, and vice versa. See Datatypes In SQLite Version 3 for more information.
The reason for the error is that the table contains values of the "wrong" type in its BLOB columns. A correct SQLite binary literal looks like x'abcdef'. The INSERT commands that you use insert UTF-8 strings instead.
This problem was somewhat fixed in the latest version of Pony, which you can take from GitHub. Now, if Pony receives a string value from a BLOB column, it just keeps that value without throwing an exception.
If you populate the table with Pony, it will write BLOB data as correct binary values, so it can read them back later without any problem.
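For illustration, a minimal sketch of populating the table through Pony instead, so the BLOB columns receive real binary values (the inserted values here are made up):
from pony.orm import db_session

# Hedged sketch: inserting through Pony stores path/name as proper binary
# values rather than UTF-8 text. Values are illustrative.
with db_session:
    File(t='file', path=b'', name=b'example.db',
         sum='a403e9b46e54d6ece851881a895b1953')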

psql cast parse error during cursor.fetchall()

I have Python code which queries PostgreSQL and returns a batch of results using cursor.fetchall().
It throws an exception and fails the process when a cast fails, due to bad data in the DB.
I get this exception:
File "/usr/local/lib/python2.7/site-packages/psycopg2cffi/_impl/cursor.py", line 377, in fetchall
return [self._build_row() for _ in xrange(size)]
File "/usr/local/lib/python2.7/site-packages/psycopg2cffi/_impl/cursor.py", line 891, in _build_row
self._casts[i], val, length, self)
File "/usr/local/lib/python2.7/site-packages/psycopg2cffi/_impl/typecasts.py", line 71, in typecast
return caster.cast(value, cursor, length)
File "/usr/local/lib/python2.7/site-packages/psycopg2cffi/_impl/typecasts.py", line 39, in cast
return self.caster(value, length, cursor)
File "/usr/local/lib/python2.7/site-packages/psycopg2cffi/_impl/typecasts.py", line 311, in parse_date
raise DataError("bad datetime: '%s'" % bytes_to_ascii(value))
DataError: bad datetime: '32014-03-03'
Is there a way to tell the caster to ignore this error and parse this as a string instead of failing the entire batch?
You can "hack" the parser of psycopg2cffi to return DATE objects as strings instead:
If you look in the code you can see the registration of the DATE parser, so you can replace the caster for DATE in your code.
import psycopg2cffi

psycopg2cffi._impl.typecasts._default_type(
    'DATE', [1082], psycopg2cffi._impl.typecasts.parse_string)
Of course, this can be done for every type.
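For example, a usage sketch (the connection parameters and the events table/column are hypothetical):
import psycopg2cffi

# Hedged sketch: with the string caster registered above, DATE values come
# back as plain strings, so rows like '32014-03-03' no longer raise DataError.
conn = psycopg2cffi.connect('dbname=test')
cur = conn.cursor()
cur.execute('SELECT event_date FROM events')  # hypothetical table/column
for (event_date,) in cur.fetchall():
    print(event_date)  # e.g. '32014-03-03', as a string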
Alternatively, change your PostgreSQL query to cast the date column to a string,
e.g. SELECT date_column_name::text FROM table_name.

jira-python: updating issue version field gives 'TypeError: <object> is not JSON serializable' error

I'm trying to update a custom field that is basically a version field using jira-python.
I have no trouble getting the set of versions for the project and finding the right version to set, but where I'm stuck is actually updating this custom version field.
Here is the relevant code:
pl = jira.project('PL')
versions = jira.project_versions(pl)
# assume the function below returns the list of issues I want to update
issues = query_resolved_issues()
for i in issues:
    # assume this function selects the right version in versions
    update_version = get_right_version(i, versions)
    i.update(customfield_10303=update_version)
The error occurs on the update line:
File "/usr/local/lib/python2.7/site-packages/jira/resources.py", line 352, in update
super(Issue, self).update(async=async, jira=jira, fields=data)
File "/usr/local/lib/python2.7/site-packages/jira/resources.py", line 148, in update
data = json.dumps(data)
File "/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 243, in dumps
return _default_encoder.encode(obj)
File "/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 270, in iterencode
return _iterencode(o, 0)
File "/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 184, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: <JIRA Version: name=u'IT79 - 6/11/15', id=u'12902'> is not JSON serializable
I made sure that the value stored in the custom field should be the version object itself (if I set the release version manually on an issue in JIRA and read the value of customfield_10303, I get back the same object type as the one I'm trying to set during the update). Anyone have ideas?
Based on How do you set the fixVersion field using jira-python, it should be like:
i.update(fields={'customfield_10303': [{'id': update_version.id}]})
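Putting it together with the original loop, a hedged sketch (query_resolved_issues and get_right_version are the asker's helpers):
# Hedged sketch: serialize the Version by its id instead of passing the
# resource object itself, which json.dumps cannot handle.
for i in query_resolved_issues():
    update_version = get_right_version(i, versions)  # a jira Version resource
    i.update(fields={'customfield_10303': [{'id': update_version.id}]})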

Error when fetching Python NDB entity with repeated integer property

I have an App Engine Python NDB model that looks like this:
class Car(ndb.Model):
    name = ndb.StringProperty()
    tags = ndb.IntegerProperty(repeated=True)
When I go to fetch a Car by key I use:
car = ndb.Key('Car', long(6079586488025088)).get()
When I do this I am seeing:
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/key.py", line 532, in get
return self.get_async(**ctx_options).get_result()
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 325, in get_result
self.check_success()
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 371, in _help_tasklet_along
value = gen.send(val)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/context.py", line 689, in get
pbs = entity._to_pb(set_key=False).SerializePartialToString()
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/model.py", line 3052, in _to_pb
prop._serialize(self, pb, projection=self._projection)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/model.py", line 1365, in _serialize
values = self._get_base_value_unwrapped_as_list(entity)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/model.py", line 1135, in _get_base_value_unwrapped_as_list
wrapped = self._get_base_value(entity)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/model.py", line 1123, in _get_base_value
return self._apply_to_values(entity, self._opt_call_to_base_type)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/model.py", line 1295, in _apply_to_values
value[:] = map(function, value)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/model.py", line 1177, in _opt_call_to_base_type
value = _BaseValue(self._call_to_base_type(value))
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/model.py", line 1198, in _call_to_base_type
return call(value)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/model.py", line 1274, in call
newvalue = method(self, value)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/model.py", line 1536, in _validate
(value,))
BadValueError: Expected integer, got None
If I remove that property from the model definition it returns fine, so I know it's this property. In the datastore it is listed as having a null value for that field. Any idea why this is happening and how to deal with it? Thanks!
This generally occurs when you first have a single, non-repeated property and then convert it to a repeated property. When you initially do a put(), if you have not yet set the property, it will fill the value with None. However, if you then turn it into a repeated property, ndb will read this and think you want [None]. Because None is not a valid IntegerProperty value, trying to serialize and put() the data will fail.
In your example it fails on a get() because, after doing a get() from the datastore, ndb tries to serialize the data and put it in memcache.
Depending on your situation, you have a couple of options:
1. If you are only running in the development server, clear your datastore by running dev_appserver.py --clear_datastore.
2. Do a search for all objects with a None value and replace them with the empty list. This might look something like this:
for c in Car.query(Car.tags == None):
    c.tags = []
    c.put()
Note that you have to be careful about a few things here. First, make sure that c.tags is only [None] and not [a, b, c, None], just in case. Second, if you have a lot of Cars with no tags, you won't be able to fix them all in a single request; you'll either want to run on a backend or pass the data on to tasks for processing (see the sketch after this list).
3. This is pretty similar to #2, but if you have very little data you could use the Datastore viewer and simply re-save the entities with tags = None.
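If there are too many entities for a single request, here is a minimal sketch of option 2 driven by the task queue. It assumes the standard google.appengine.ext.deferred library; the batch size and the fix_cars helper name are made up for illustration.
from google.appengine.ext import deferred, ndb

BATCH_SIZE = 100

def fix_cars(cursor=None):
    # Fetch one page of Cars whose tags list contains None.
    cars, next_cursor, more = Car.query(Car.tags == None).fetch_page(
        BATCH_SIZE, start_cursor=cursor)
    for c in cars:
        c.tags = []  # assumes the broken value is exactly [None]
    ndb.put_multi(cars)
    if more:
        # Chain another task to process the next page.
        deferred.defer(fix_cars, next_cursor)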
