Elixir not creating my tables with default values - python

class MyObject(Entity):
    name = Field(Unicode(256), default=u'default name', nullable=False)
    using_options(shortnames=True)
    using_mapper_options(save_on_init=False)

    def __init__(self):
        self.name = None
I am using MySQL in this case, but have also checked against SQLite and I get the same result. It respects nullable, but ignores default entirely. I don't get any error messages, and it creates the tables just fine. I could go back through and add the defaults, but this is a serious pain that I would like to avoid if possible.
I've tried it with other Field types, but still no joy.

The default keyword argument in SQLAlchemy is a Python-side runtime default: it is applied by SQLAlchemy at INSERT time and never appears in the table's DDL. Pass a PassiveDefault() object as a positional argument when you really need a database-level default.
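For illustration, a minimal sketch of the distinction, assuming Elixir's Field() passes positional arguments through to the underlying Column; note that PassiveDefault was later renamed DefaultClause, with the server_default keyword as the usual modern spelling:

from elixir import Entity, Field, Unicode, using_options
from sqlalchemy import PassiveDefault

class MyObject(Entity):
    # Python-side default: filled in by SQLAlchemy at INSERT time,
    # absent from the emitted CREATE TABLE.
    name = Field(Unicode(256), default=u'default name', nullable=False)
    # Database-side default: emitted as DEFAULT 'default label' in the DDL.
    # ('label' is an illustrative extra field, not from the question.)
    label = Field(Unicode(256), PassiveDefault(u'default label'),
                  nullable=False)
    using_options(shortnames=True)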

python jsonschema remove additional and use defaults

I'm using the Python jsonschema library (https://python-jsonschema.readthedocs.io/en/latest/)
and I'm trying to find out how to use default values and remove additional fields when they are found.
Does anyone know how I'm supposed to do that?
Or is there another solution for validating against a JSON Schema that supports default values and removes any additional fields (like Ajv does in JavaScript)?
Hidden in the FAQ you'll find this:
Why doesn’t my schema’s default property set the default on my
instance? The basic answer is that the specification does not require
that default actually do anything.
For an inkling as to why it doesn’t actually do anything, consider
that none of the other validators modify the instance either. More
importantly, having default modify the instance can produce quite
peculiar things. It’s perfectly valid (and perhaps even useful) to
have a default that is not valid under the schema it lives in! So an
instance modified by the default would pass validation the first time,
but fail the second!
Still, filling in defaults is a thing that is useful. jsonschema
allows you to define your own validator classes and callables, so you
can easily create a jsonschema.IValidator that does do default
setting. Here’s some code to get you started. (In this code, we add
the default properties to each object before the properties are
validated, so the default values themselves will need to be valid
under the schema.)
from jsonschema import Draft4Validator, validators

def extend_with_default(validator_class):
    validate_properties = validator_class.VALIDATORS["properties"]

    def set_defaults(validator, properties, instance, schema):
        for property, subschema in properties.items():  # iteritems() on Python 2
            if "default" in subschema:
                instance.setdefault(property, subschema["default"])

        for error in validate_properties(
            validator, properties, instance, schema,
        ):
            yield error

    return validators.extend(
        validator_class, {"properties": set_defaults},
    )

DefaultValidatingDraft4Validator = extend_with_default(Draft4Validator)

# Example usage:
obj = {}
schema = {'properties': {'foo': {'default': 'bar'}}}
# Note jsonschema.validate(obj, schema, cls=DefaultValidatingDraft4Validator)
# will not work because the metaschema contains `default` directives.
DefaultValidatingDraft4Validator(schema).validate(obj)
assert obj == {'foo': 'bar'}
From: https://python-jsonschema.readthedocs.io/en/latest/faq/#why-doesn-t-my-schema-s-default-property-set-the-default-on-my-instance
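The FAQ snippet only fills in defaults; the other half of the question, stripping undeclared fields the way Ajv's removeAdditional does, is not covered there. A hedged sketch of one way to extend the same pattern (the names are illustrative; pruning only happens on objects whose schema sets additionalProperties to False, and the schema lists properties before additionalProperties so the pruning runs first):

from jsonschema import Draft4Validator, validators

def extend_with_default_and_prune(validator_class):
    validate_properties = validator_class.VALIDATORS["properties"]

    def set_defaults_and_prune(validator, properties, instance, schema):
        if isinstance(instance, dict):
            # Mimic Ajv's removeAdditional: drop undeclared keys
            # before filling in defaults.
            if schema.get("additionalProperties") is False:
                for key in list(instance):
                    if key not in properties:
                        del instance[key]
            for prop, subschema in properties.items():
                if "default" in subschema:
                    instance.setdefault(prop, subschema["default"])
        for error in validate_properties(validator, properties, instance, schema):
            yield error

    return validators.extend(
        validator_class, {"properties": set_defaults_and_prune},
    )

PruningDraft4Validator = extend_with_default_and_prune(Draft4Validator)

obj = {"foo": 1, "extra": True}
schema = {"properties": {"foo": {}}, "additionalProperties": False}
PruningDraft4Validator(schema).validate(obj)
assert obj == {"foo": 1}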

Dialect-specific SQLAlchemy declarative Column defaults

Short Version
In SQLAlchemy's ORM column declaration, how can I use server_default=sa.FetchedValue() on one dialect, and default=somePythonFunction on another, so that my real DBMS can populate things with triggers, and my test code can be written against sqlite?
Background
I'm using SQLAlchemy's declarative ORM to work with a Postgres database, but trying to write unit tests against an sqlite:///:memory:, and running into a problem with columns that have computed defaults on their primary keys. For a minimal example:
CREATE TABLE test_table(
    id VARCHAR PRIMARY KEY NOT NULL
        DEFAULT (lower(hex(randomblob(16))))
)
SQLite itself is quite happy with this table definition (sqlfiddle) but SQLAlchemy seems unable to work out the ID of newly created rows.
class TestTable(Base):
    __tablename__ = 'test_table'
    id = sa.Column(
        sa.VARCHAR,
        primary_key=True,
        server_default=sa.FetchedValue())
Definitions like this work just fine in postgres, but die in sqlite (as you can see on Ideone) with a FlushError when I call Session.commit:
sqlalchemy.orm.exc.FlushError: Instance <TestTable at 0x7fc0e0254a10> has a NULL identity key. If this is an auto-generated value, check that the database table allows generation of new primary key values, and that the mapped Column object is configured to expect these generated values. Ensure also that this flush() is not occurring at an inappropriate time, such as within a load() event.
The documentation for FetchedValue warns us that this can happen on dialects that don't support the RETURNING clause on INSERT:
For special situations where triggers are used to generate primary key
values, and the database in use does not support the RETURNING clause,
it may be necessary to forego the usage of the trigger and instead
apply the SQL expression or function as a “pre execute” expression:
t = Table('test', meta,
    Column('abc', MyType, default=func.generate_new_value(),
           primary_key=True)
)
func.generate_new_value is not defined anywhere else in SQLAlchemy, so it seems they intend that I either generate defaults in Python, or else write a separate function that runs a SQL query to generate a default value in the DBMS. I can do that, but the problem is that I only want to do it for SQLite, since FetchedValue does exactly what I want on postgres.
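For what it's worth, the docs' placeholder can be made concrete for the SQLite table above. A hedged sketch: sa.func.* simply renders the named SQL functions, and on dialects without INSERT..RETURNING SQLAlchemy pre-executes the expression in a standalone SELECT, so it learns the new primary key. As written it hard-codes SQLite's randomblob, which is exactly the dialect coupling the rest of this question is about:

import sqlalchemy as sa

class TestTable(Base):
    __tablename__ = 'test_table'
    # SQL-expression default, pre-executed on SQLite so the new
    # primary key is known before the INSERT.
    id = sa.Column(
        sa.VARCHAR,
        primary_key=True,
        default=sa.func.lower(sa.func.hex(sa.func.randomblob(16))))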
Dead Ends
Subclassing Column probably won't work. Nothing I can find in the sources ever tells the Column which dialect is being used, and the behavior of the default and server_default fields is defined outside the class.
Writing a python function that calls the triggers by hand on the real DBMS creates a race condition. Avoiding the race condition by changing the isolation level creates a deadlock.
My Current Workaround
Bad because it breaks integration tests that connect to a real postgres.
import sys
import sqlalchemy as sa

def trigger_column(*a, **kw):
    python_default = kw.pop('python_default')
    if 'unittest' in sys.modules:
        return sa.Column(*a, default=python_default, **kw)
    else:
        return sa.Column(*a, server_default=sa.FetchedValue(), **kw)
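A variant that might avoid breaking those integration tests is to key the switch on the configured database URL rather than on sys.modules. A hedged sketch (database_url stands in for however the connection string reaches your models; make_url is importable from sqlalchemy.engine.url across releases):

import uuid
import sqlalchemy as sa
from sqlalchemy.engine.url import make_url

def trigger_column(database_url, *a, **kw):
    # Decide from the dialect the app will actually talk to,
    # not from whether unittest happens to be imported.
    if make_url(database_url).get_backend_name() == 'sqlite':
        return sa.Column(*a, default=lambda: uuid.uuid4().hex, **kw)
    return sa.Column(*a, server_default=sa.FetchedValue(), **kw)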
Not a direct answer to your question, but hopefully helpful to someone:
My problem was wanting to change the collation depending on the dialect; this was my solution:
from sqlalchemy import Unicode
from sqlalchemy.ext.compiler import compiles

@compiles(Unicode, 'sqlite')
def compile_unicode(element, compiler, **kw):
    element.collation = None
    return compiler.visit_unicode(element, **kw)
This changes the collation for all Unicode columns, but only on the sqlite dialect.
Here's some documentation: http://docs.sqlalchemy.org/en/latest/core/custom_types.html#overriding-type-compilation

Adding Naming Convention to Existing Database

I'm using sqlalchemy and am trying to integrate alembic for database migrations.
My database currently exists and has a number of ForeignKeys defined without names. I would like to add a naming convention to allow for migrations that affect ForeignKey columns.
I've added the naming convention given here to the top of my models.py file:
SQLAlchemy Naming Constraints
convention = {
    "ix": 'ix_%(column_0_label)s',
    "uq": "uq_%(table_name)s_%(column_0_name)s",
    "ck": "ck_%(table_name)s_%(constraint_name)s",
    "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    "pk": "pk_%(table_name)s"
}

DeclarativeBase = declarative_base()
DeclarativeBase.metadata = MetaData(naming_convention=convention)

def db_connect():
    return create_engine(URL(**settings.DATABASE))

def create_reviews_table(engine):
    DeclarativeBase.metadata.create_all(engine)

class Review(DeclarativeBase):
    __tablename__ = 'reviews'
    id = Column(Integer, primary_key=True)
    review_id = Column('review_id', String, primary_key=True)
    resto_id = Column('resto_id', Integer, ForeignKey('restaurants.id'),
                      nullable=True)
    url = Column('url', String)
    resto_name = Column('resto_name', String)
I've set up alembic/env.py as per the tutorial instructions, feeding my model's metadata into target_metadata.
When I run
$: alembic current
I get the following error:
sqlalchemy.exc.InvalidRequestError: Naming convention including %(constraint_name)s token requires that constraint is explicitly named.
In the docs they say that "This same feature [generating names for columns using a naming convention] takes effect even if we just use the Column.unique flag", so I'm thinking that there shouldn't be a problem (they go on to give an example using a ForeignKey that isn't named, too).
Do I need to go back and give all my constraints explicit names, or is there a way to do it automatically?
Just modify the "ck" entry in the convention to "ck": "ck_%(table_name)s_%(column_0_name)s". It works for me.
Refer to the SQLAlchemy docs.
What this error message is telling you is that you should name such constraints explicitly. The constraints it's referring to are the ones generated by Boolean, Enum, etc., not foreign keys or primary keys.
So go through your tables; wherever you have a Boolean or an Enum, add a name to it. For example:
is_active = Column(Boolean(name='is_active'))
That's what you need to do.
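An Enum column works the same way (a hedged example; the value list and the type name are illustrative). On backends that emulate enums, this name is also what the generated CHECK constraint uses:

status = Column(Enum('draft', 'published', name='status_enum'))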
This does not aim to be a definitive answer, and it also fails to answer your immediate technical question, but could it be a "philosophical" problem? Either your SQLAlchemy code is the source of truth as far as the database is concerned, or the RDBMS is. Faced with this mixed situation, where each of the two owns part of it, I would see two avenues:
The one that you are exploring: you modify the database's schema to match the SQLAlchemy model and make your Python code the master. This is the most intuitive, but it may not always be possible, for both technical and administrative reasons.
Accepting that the RDBMS has information that SQLAlchemy doesn't have, but which is fortunately not relevant for day-to-day work. Your best chance is to use another migration tool (ETL) that will reverse engineer the database before migrating it. After the migration is complete, you could give control of the new instance back to SQLAlchemy (which may require some adjustments to the new DB or to the model).
There is no way to tell in advance which approach will work, since both have their own challenges. But I would give some thought to the second method.
I've had some luck altering naming_convention back to {} in each older migration so that they run with the correct historical context.
Still entirely unsure what kind of interesting side-effects this might have.

web2py, one to many, not required relationship

Consider 2 tables in the web2py python web framework.
The default auth_user
and this one:
db.define_table(
    'artwork',
    Field('user_id', 'reference auth_user', required=False),
    Field('name'),
    Field('description', 'text')
)
Now, when I go to the database administration interface (appadmin), I would expect user_id to be optional. If I leave the selection drop-down empty when manually entering a row in the artwork table, it says "value not in database", which seems to contradict the required=False statement.
I would like to be able to insert an artwork entry without a user_id.
Can someone help me resolve this one? Thanks a lot.
When you create a reference field, by default it gets a requires=IS_IN_DB(...) validator. The requires attribute is enforced at the level of forms, whereas the required attribute is enforced at the level of the DAL. To override the default form validator, you can do:
Field('user_id', 'reference auth_user', requires=None)
or alternatively,
Field('user_id', 'reference auth_user',
      requires=IS_EMPTY_OR(IS_IN_DB(db, 'auth_user.id',
                                    db.auth_user._format)))
@Anthony's answer was right; however, nowadays it is no longer necessary to explicitly write the IS_EMPTY_OR(...) part for a reference field, because it is set automatically since web2py 2.16.x.
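As a quick sanity check (a hedged sketch for the web2py shell; the field values are made up), required=False means the DAL itself accepts the missing reference once the form validator is relaxed:

# Insert an artwork row with no user_id, then confirm it is stored as NULL.
art_id = db.artwork.insert(name='Untitled', description='A test piece')
assert db.artwork(art_id).user_id is None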

Instantiating Multiple AbstractConcreteBase Issue

I'm getting an error I don't understand with AbstractConcreteBase
in my_enum.py
class MyEnum(AbstractConcreteBase, Base):
    pass
in enum1.py
class Enum1(MyEnum):
    years = Column(SmallInteger, default=0)

# class MyEnums1:
#     NONE = Enum1()
#     Y1 = Enum1(years=1)
in enum2.py
class Enum2(MyEnum):
    class_name_python = Column(String(50))
in test.py
from galileo.copernicus.basic_enum.enum1 import Enum1
from galileo.copernicus.basic_enum.enum2 import Enum2
#...
If I uncomment the three lines in enum1.py I get the following error on the second import.
AttributeError: type object 'MyEnum' has no attribute '__table__'
Without MyEnums1 it works fine, and with MyEnums1 in a separate file it also works fine. Why would this instantiation affect the import? Is there any way I can keep MyEnums1 in the same file?
The purpose of AbstractConcreteBase is to apply a non-standard order of operations to the standard mapping procedure. Normally, mapping works like this:
1. define a class to be mapped
2. define a Table
3. map the class to the Table using mapper().
Declarative essentially combines these three steps, but that's what it does.
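As a hedged illustration of those three steps done by hand with the legacy classical-mapping API (the names here are made up):

from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.orm import mapper

class User(object):          # 1. define a class to be mapped
    pass

metadata = MetaData()
user_table = Table(          # 2. define a Table
    'user', metadata,
    Column('id', Integer, primary_key=True))

mapper(User, user_table)     # 3. map the class to the Table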
When using an abstract concrete base, we have this totally special step that needs to happen - the base class needs to be mapped to a union of all the tables that the subclasses are mapped to. So if you have enum1 and enum2, the "Base" needs to map to essentially "select * from enum1 UNION ALL select * from enum2".
This mapping to a UNION can't happen piecemeal; the MyEnum base class has to present itself to mapper() with the full UNION of every sub-table at once. So AbstractConcreteBase performs the complex task of rearranging how declarative works so that the base MyEnum is not mapped at all until the mapper configuration stage, which among other times runs when you first instantiate a mapped class. It then inserts itself as the mapped base for all the existing mapped subclasses.
So basically, by instantiating an Enum1() object at the class level like that, you're invoking configure_mappers() way too early, such that by the time Enum2 comes along the AbstractConcreteBase is already baked and the process fails.
All of that aside, it's not at all correct to be instantiating a mapped class like Enum1() at the class level like that. ORM-mapped objects are the complete opposite of global objects and must always be created local to a specific Session.
Edit: also, those classes are supposed to have __mapper_args__ = {"concrete": True} on them, which is part of why you're getting this message. I'm trying to see if the message can be improved.
Edit 2: yeah, the mechanics here are weird. I've committed something that skips this particular error message, though it will now fail differently and not much better; getting this to fail more gracefully would require a little more work.
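Pulling those edits together, a hedged sketch of how the declarations might look (table names and polymorphic identities are illustrative): each concrete subclass carries __mapper_args__ with "concrete": True, and nothing is instantiated until every subclass is defined and the mappers are configured.

from sqlalchemy import Column, Integer, SmallInteger, String
from sqlalchemy.ext.declarative import AbstractConcreteBase, declarative_base
from sqlalchemy.orm import configure_mappers

Base = declarative_base()

class MyEnum(AbstractConcreteBase, Base):
    pass

class Enum1(MyEnum):
    __tablename__ = 'enum1'
    id = Column(Integer, primary_key=True)
    years = Column(SmallInteger, default=0)
    __mapper_args__ = {'polymorphic_identity': 'enum1', 'concrete': True}

class Enum2(MyEnum):
    __tablename__ = 'enum2'
    id = Column(Integer, primary_key=True)
    class_name_python = Column(String(50))
    __mapper_args__ = {'polymorphic_identity': 'enum2', 'concrete': True}

# Only now, with both subclasses defined, is it safe to trigger the
# UNION mapping of MyEnum (instantiating a mapped class would also do this).
configure_mappers()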
