Long gone are the days of creating marshmallow schemas identical to my models. I found this excellent answer that explained how I could auto-generate schemas from my SQLAlchemy models using a simple decorator, so I implemented it, replacing the deprecated ModelSchema with the newer SQLAlchemyAutoSchema:
def add_schema(cls):
    class Schema(SQLAlchemyAutoSchema):
        class Meta:
            model = cls

    cls.Schema = Schema
    return cls
This worked great... until I bumped into a model with a bloody Enum.
The error: Object of type MyEnum is not JSON serializable
I searched online and I found this useful answer.
But I'd like to implement it as part of the decorator so that it is generated automatically as well. In other words, I'd like to automatically overwrite all Enums in my model with EnumField(TheEnum, by_value=True) when generating the schema using the add_schema decorator; that way I won't have to overwrite all the fields manually.
What would be the best way to do this?
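For reference, the manual fix amounts to declaring the enum field explicitly on every schema, which is exactly the boilerplate I'd like the decorator to generate. A minimal sketch, with a made-up Employee model and EmployeeType enum:

import enum

from marshmallow_enum import EnumField
from marshmallow_sqlalchemy import SQLAlchemyAutoSchema
from sqlalchemy import Column, Enum, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class EmployeeType(enum.Enum):
    FULL_TIME = "full_time"
    CONTRACTOR = "contractor"

class Employee(Base):
    __tablename__ = "employee"
    id = Column(Integer, primary_key=True)
    employee_type = Column(Enum(EmployeeType))

class EmployeeSchema(SQLAlchemyAutoSchema):
    class Meta:
        model = Employee

    # the per-field override I want add_schema to produce for me
    employee_type = EnumField(EmployeeType, by_value=True)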
I have found that the support for enum types that was initially suggested only works if OneOf is the only validation class present in field_details. I added some rudimentary argument parsing (stringifying the result of OneOf's _repr_args() and looking for "choices") to check the validation classes, hoping to make this implementation more universally usable:
def add_schema(cls):
    class Schema(ma.SQLAlchemyAutoSchema):
        class Meta:
            model = cls

    fields = Schema._declared_fields

    # support for enum types
    for field_name, field_details in fields.items():
        if len(field_details.validate) > 0:
            check = str(field_details.validate[0]._repr_args())
            if "choices" in check:
                enum_list = field_details.validate[0].choices
                enum_dict = {choice: choice for choice in enum_list}
                enum_clone = Enum(field_name.capitalize(), enum_dict)
                fields[field_name] = EnumField(enum_clone, by_value=True, validate=validate.OneOf(enum_list))

    cls.Schema = Schema
    return cls
Thank you jgozal for the initial solution, as I really needed this lead for my current project.
This is my solution:
from marshmallow import validate
from marshmallow_sqlalchemy import SQLAlchemyAutoSchema
from marshmallow_enum import EnumField
from enum import Enum
def add_schema(cls):
    class Schema(SQLAlchemyAutoSchema):
        class Meta:
            model = cls

    fields = Schema._declared_fields

    # support for enum types
    for field_name, field_details in fields.items():
        if len(field_details.validate) > 0:
            enum_list = field_details.validate[0].choices
            enum_dict = {choice: choice for choice in enum_list}
            enum_clone = Enum(field_name.capitalize(), enum_dict)
            fields[field_name] = EnumField(enum_clone, by_value=True, validate=validate.OneOf(enum_list))

    cls.Schema = Schema
    return cls
The idea is to iterate over the fields in the Schema and find those that have validation (usually enums). From there we can extract a list of choices which can then be used to build an enum from scratch. Finally we overwrite the schema field with a new EnumField.
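For anyone wondering how this is used, here is a quick usage sketch with a made-up Widget model and Color enum, relying on the add_schema decorator defined above (the exact dump output depends on your marshmallow-sqlalchemy version):

import enum

from sqlalchemy import Column, Enum, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Color(enum.Enum):
    RED = "red"
    BLUE = "blue"

@add_schema
class Widget(Base):
    __tablename__ = "widget"
    id = Column(Integer, primary_key=True)
    color = Column(Enum(Color))

schema = Widget.Schema()
print(schema.dump(Widget(id=1, color=Color.RED)))  # roughly {'id': 1, 'color': 'red'}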
By all means, feel free to improve the answer!
I'm a little new to tinkering with class inheritance in Python, particularly when it comes to using class attributes. In this case I am using a class attribute to change an argument in pydantic's Field() function. This wouldn't be too hard to do if my class contained its own constructor; however, my class User1 inherits this from pydantic's BaseModel.
The idea is that I would like to be able to change the class attribute prior to creating the instance.
Please see some example code below:
from pydantic import BaseModel, Field

class User1(BaseModel):
    _set_ge = None  # create class attribute
    item: float = Field(..., ge=_set_ge)

    # just pass through to BaseModel's __init__ (I want to avoid overriding it)
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

User1._set_ge = 0  # setting the class attribute to a new value

instance = User1(item=-1)
print(instance)  # item=-1.0
When creating the instance using instance = User1(item=-1) I would expect a validation error to be thrown, but it instead passes validation and simply returns the item value.
If I had my own constructor there would be little issue in changing _set_ge, but as User1 inherits this constructor from BaseModel, things are a little more complicated.
The eventual aim is to add this class to a fastapi endpoint as follows:
from fastapi import FastAPI
from schemas import User1

app = FastAPI()

class NewUser1(User1):
    pass

NewUser1._set_ge = 0

@app.post("/")
def endpoint(request: NewUser1):
    return User1.item
To reduce code duplication, I aimed to use this method to easily change Field() arguments. If there is a better way, I'd be glad to consider that too.
This question is quite closely related to this unanswered one.
In the end, the @validator proposal by @hernán-alarcón is probably the best way to do this. For example:
from typing import ClassVar

from pydantic import BaseModel, Field, validator
from pydantic.errors import NumberNotGeError

class User(BaseModel):
    # the ClassVar annotation makes the intent clearer, but the leading underscore should be sufficient
    _set_ge: ClassVar[float] = None
    item: float = Field(...)

    @validator('item')
    def limits(cls, v):
        limit_number = cls._set_ge
        if v >= limit_number:
            return v
        else:
            raise NumberNotGeError(limit_value=limit_number)

class User1(User):
    _set_ge = 0  # setting the class attribute to a new value

instance = User1(item=-1)  # raises the error
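Tying that back to the FastAPI endpoint in the question, the per-endpoint subclass then only overrides the class attribute. A small sketch (the schemas import path is the hypothetical one from the question):

from fastapi import FastAPI
from schemas import User  # hypothetical import path, as in the question

app = FastAPI()

class NewUser1(User):
    _set_ge = 0  # lower bound enforced by the inherited validator

@app.post("/")
def endpoint(request: NewUser1):
    return request.item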
I'm trying to create some objects in setUp method of Django test case. I use FactoryBoy that helps me with creating the objects. But it seems that FactoryBoy can't find any objects in the database.
factories.py
class ProductFactory(DjangoModelFactory):
    ...
    market_category = factory.fuzzy.FuzzyChoice(list(MarketplaceCategory.objects.all()))

    class Meta:
        model = Product
tests.py
from django.test import TestCase
from marketplaces.models import MarketplaceCategory

class MyTestCase(TestCase):
    def setUp(self) -> None:
        ...
        self.marketplace_category = MarketplaceCategoryFactory.create()
        print(MarketplaceCategory.objects.first().pk)  # prints 1
        self.product = ProductFactory(created_by=self.user)
As you can see, ProductFactory tries to populate Product.market_category with a random MarketplaceCategory object.
The problem is that the factory behaves as if no such object exists, even though I created one beforehand and made sure it is in the db (it has a pk).
EDIT: It chose a MarketplaceCategory object with pk=25, but there is only one such object in the test db, with pk=1. I think it accesses the Django development DB instead of the testing one.
The error:
psycopg2.errors.ForeignKeyViolation: insert or update on table "products_product" violates foreign key constraint "products_product_market_category_id_2d634517_fk"
DETAIL: Key (market_category_id)=(25) is not present in table "marketplaces_marketplacecategory".
Do you have any idea why it behaves this way? It looks like the Factory is accessing the real DB instead of testdb for some reason.
Defining the "market_category" field like that is going to cause issues: the queryset that populates the choices is executed at some arbitrary time, whenever the module is imported, and the instances it returns may no longer exist by the time the factory runs. You should use a SubFactory:
class ProductFactory(DjangoModelFactory):
    market_category = factory.SubFactory(MarketplaceCategoryFactory)

    class Meta:
        model = Product
Alternatively, pass the queryset directly to FuzzyChoice to get a random existing value; don't convert it to a list:
class ProductFactory(DjangoModelFactory):
    market_category = factory.fuzzy.FuzzyChoice(MarketplaceCategory.objects.all())

    class Meta:
        model = Product
This will then create an instance whenever you create a product, but you can pass "market_category" to the factory to override it:
class MyTestCase(TestCase):
    def setUp(self) -> None:
        self.marketplace_category = MarketplaceCategoryFactory.create()
        self.product = ProductFactory(created_by=self.user, market_category=self.marketplace_category)
I have a Team model in my Django project. I create its custom model manager with QuerySet.as_manager().
from django.db import models

class TeamQuerySet(models.QuerySet):
    def active(self) -> "models.QuerySet[Team]":
        return self.filter(is_active=True)

class Team(models.Model):
    is_active = models.BooleanField()

    objects = TeamQuerySet.as_manager()
When I try to execute Team.objects.active(), mypy gives the following error:
error: "Manager[Any]" has no attribute "active"
In [5]: Team.objects
Out[5]: <django.db.models.manager.ManagerFromTeamQuerySet at 0x10eee1f70>
If I were explicitly defining a TeamManager class, there would be no problem. How can I hint the type of the Django model field objects when it is a dynamically generated class?
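For reference, the explicit TeamManager alluded to above would look roughly like this, which is exactly the duplication I'm trying to avoid:

from django.db import models

class TeamManager(models.Manager):
    def get_queryset(self) -> TeamQuerySet:
        return TeamQuerySet(self.model, using=self._db)

    def active(self) -> "models.QuerySet[Team]":
        return self.get_queryset().active()

class Team(models.Model):
    is_active = models.BooleanField()

    objects = TeamManager()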
Based on the Manager[Any] I assume you are already using django-stubs.
Unfortunately it looks like there's an open issue to make QuerySet.as_manager generic over the model it's attached to, which has not been resolved yet.
Even if the PR addressing the issue were merged, I'm afraid it wouldn't solve your immediate problem, because as_manager needs to be generic over the QuerySet subclass used to create the manager in order for both .active and the Team-related attributes to be available.
In this regard this other PR, which is unfortunately quite stale, seems to properly address your issue.
I've worked around this with a little switch-a-roo for MyPy's sake:
from typing import TYPE_CHECKING, TypeVar

from django.db import models

_Q = TypeVar("_Q", bound="WorkflowQuerySet")

class WorkflowQuerySet(models.QuerySet["WorkflowModel"]):
    """
    Queryset for workflow objects.
    """

    def count_objects(self) -> int:
        raise NotImplementedError

    def latest_objects(self: _Q) -> _Q:
        raise NotImplementedError

if TYPE_CHECKING:
    # Create a type MyPy understands
    class WorkflowManager(models.Manager["WorkflowModel"]):
        def count_objects(self) -> int:
            ...

        def latest_objects(self) -> "WorkflowQuerySet":
            ...
else:
    WorkflowManager = WorkflowQuerySet.as_manager

class WorkflowModel(models.Model):
    """
    A model that has workflow.
    """

    objects = WorkflowManager()
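With that in place, mypy should resolve the queryset methods through the stub class while the runtime still uses as_manager. A tiny usage sketch:

qs = WorkflowModel.objects.latest_objects()    # mypy sees WorkflowQuerySet via the TYPE_CHECKING stub
count = WorkflowModel.objects.count_objects()  # mypy infers int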
Here is my answer using generics and typevar
from typing import Generic, TypeVar

from django.db import models

class BookQueryset(models.QuerySet['Book']):
    ...

class Book(models.Model):
    objects: BookQueryset = BookQueryset.as_manager()

book = Book.objects.all()[0]
If you inspect book, it is of type Book.
I've got a model with a field tool_class, whose verbose name is "class" and differs from the field name:
class Tool(models.Model):
    tool_class = jsonfield.JSONField(verbose_name="class")
The Serializer and ViewSet are just stock HyperlinkedModelSerializer and ModelViewSet.
So, when I POST or PUT data to the server with a key class, it is recognized fine:
'{"class": "..."}
but in the response data it is called tool_class again:
{"tool_class": "..."}
How to make it be called class always?
I can't use the name "class" for the field name, as it is a reserved word in python, but in API it absolutely must be called "class", because the API conforms to a certain open standard, which specifies this word.
Obviously, I cannot say:
class = CharField(source="tool_class")
in my ToolSerializer, because it's a SyntaxError: invalid syntax.
SIMPLE SOLUTION:
Guys in another thread suggested a great solution. You can use vars() syntax to circumvent this problem. For instance, I use the following code:
class Tool(Document):
    vars()['class'] = mongoengine.fields.DictField(required=True)
The serializer creates the respective field automatically. Ain't we sneaky?
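In principle the same vars() trick should carry over to the DRF serializer itself, since its metaclass collects declared fields from the class namespace. An untested sketch (the exclude is there so the model field is not also emitted under its Python name):

from rest_framework import serializers

class ToolSerializer(serializers.ModelSerializer):
    class Meta:
        model = Tool
        exclude = ['tool_class']

    # inject a declared field whose name is the reserved word "class"
    vars()['class'] = serializers.JSONField(source='tool_class')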
I tried to find a way to have a field called class on the serializer, using some tricks with setattr, but it was getting very intrusive and hacky. The field_name is collected from the field at the time of binding the field to the serializer, and there is no easy place to override the behaviour of the bind.
In the end I decided it would be better and simpler just to let DRF do its thing, and add a post-processing step on the serializer:
class ToolSerializer(ModelSerializer):
    class Meta:
        model = Tool

    def to_representation(self, instance):
        data = super(ToolSerializer, self).to_representation(instance)
        data['class'] = data.pop('tool_class')
        return data
Note that the data structure returned by to_representation is an OrderedDict, and this disturbs the ordering slightly - the renamed key in this mapping will be removed from wherever it was and pushed to the back.
That is unlikely to be an issue for most use-cases, so you shouldn't bother to address it if not necessary. If you do need to preserve ordering, rebuild a new OrderedDict using a comprehension:
data = OrderedDict(
    ('class' if k == 'tool_class' else k, v) for (k, v) in data.items()
)
You can do this by overriding the metaclass for Serializers. Here is an example of a serializers.py file.
The main magic is this section of the metaclass
# Remap fields (to use class instead of class_)
fields_ = []
for name, field in fields:
    if name.endswith('_'):
        name = name.rstrip('_')
    fields_.append((name, field))
This takes any field you define in the serializer that ends in an underscore (i.e. field_) and removes the trailing underscore from the name when it binds the fields and sets the _declared_fields attribute on the serializer.
from collections import OrderedDict

from rest_framework import serializers
from rest_framework.fields import Field

from snippets.models import Snippet, LANGUAGE_CHOICES, STYLE_CHOICES

class MyMeta(serializers.SerializerMetaclass):
    @classmethod
    def _get_declared_fields(cls, bases, attrs):
        fields = [(field_name, attrs.pop(field_name))
                  for field_name, obj in list(attrs.items())
                  if isinstance(obj, Field)]
        fields.sort(key=lambda x: x[1]._creation_counter)

        # If this class is subclassing another Serializer, add that Serializer's
        # fields. Note that we loop over the bases in *reverse*. This is necessary
        # in order to maintain the correct order of fields.
        for base in reversed(bases):
            if hasattr(base, '_declared_fields'):
                fields = list(base._declared_fields.items()) + fields

        # Remap fields (to use class instead of class_)
        fields_ = []
        for name, field in fields:
            if name.endswith('_'):
                name = name.rstrip('_')
            fields_.append((name, field))

        return OrderedDict(fields_)

class ToolSerializer(serializers.Serializer, metaclass=MyMeta):
    ...
    class_ = serializers.JSONField(source='tool_class', label='class')

    def create(self, validated_data):
        """
        Create and return a new `Snippet` instance, given the validated data.
        """
        return Snippet.objects.create(**validated_data)

    def update(self, instance, validated_data):
        """
        Update and return an existing `Snippet` instance, given the validated data.
        """
        ...
        instance.class_ = validated_data.get('class', instance.class_)
        instance.save()
        return instance
I am generating a Django model based on an abstract model class AbstractAttr and a normal model (let's say Foo).
I want my foo/models.py to look like this:
from bar.models import Attrs

# ...

class Foo(models.Model):
    ...
    attrs = Attrs()
In the Attrs class, which mimics a field, I have a contribute_to_class method that generates the required model using type(). The generated model class is called FooAttr.
Everything works. If I migrate, I see FooAttr appear in the proper table.
EXCEPT FOR ONE THING.
I want to be able to from foo.models import FooAttr. Somehow my generated FooAttr class is not bound to the models.py file in which it is generated.
If I change my models.py to this:
class Foo(models.Model):
    # ...

FooAttr = generate_foo_attr_class(...)
it works, but this is not what I want (for example, this forces the dev to guess the generated class name).
Is what I want possible, define the class somewhat like in the first example AND bind it to the specific models.py module?
The project (pre-Alpha) is here (in develop branch):
https://github.com/zostera/django-mav
Some relevant code:
def create_model_attribute_class(model_class, class_name=None, related_name=None, meta=None):
    """
    Generate a value class (derived from AbstractModelAttribute) for a given model class

    :param model_class: The model to create a AbstractModelAttribute class for
    :param class_name: The name of the AbstractModelAttribute class to generate
    :param related_name: The related name
    :return: A model derived from AbstractModelAttribute with an object pointing to model_class
    """
    if model_class._meta.abstract:
        # This can't be done, because `object = ForeignKey(model_class)` would fail.
        raise TypeError("Can't create attrs for abstract class {0}".format(model_class.__name__))

    # Define inner Meta class
    if not meta:
        meta = {}
    meta['app_label'] = model_class._meta.app_label
    meta['db_tablespace'] = model_class._meta.db_tablespace
    meta['managed'] = model_class._meta.managed
    meta['unique_together'] = list(meta.get('unique_together', [])) + [('attribute', 'object')]
    meta.setdefault('db_table', '{0}_attr'.format(model_class._meta.db_table))

    # The name of the class to generate
    if class_name is None:
        value_class_name = '{name}Attr'.format(name=model_class.__name__)
    else:
        value_class_name = class_name

    # The related name to set
    if related_name is None:
        model_class_related_name = 'attrs'
    else:
        model_class_related_name = related_name

    # Make a type for our class
    value_class = type(
        str(value_class_name),
        (AbstractModelAttribute,),
        dict(
            # Set to same module as model_class
            __module__=model_class.__module__,
            # Add a foreign key to model_class
            object=models.ForeignKey(
                model_class,
                related_name=model_class_related_name
            ),
            # Add Meta class
            Meta=type(
                str('Meta'),
                (object,),
                meta
            ),
        ))

    return value_class


class Attrs(object):
    def contribute_to_class(self, cls, name):
        # Called from django.db.models.base.ModelBase.__new__
        mav_class = create_model_attribute_class(model_class=cls, related_name=name)
        cls.ModelAttributeClass = mav_class
I see you create the model from within models.py, so I think you should be able to add it to the module's globals. How about this:
new_class = create_model_attribute_class(**kwargs)
globals()[new_class.__name__] = new_class
del new_class # no need to keep original around
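Since the class is actually created inside contribute_to_class rather than literally in foo/models.py, a variant of the same idea is to set the attribute on the owning model's module via sys.modules. A sketch based on the Attrs class above:

import sys

class Attrs(object):
    def contribute_to_class(self, cls, name):
        mav_class = create_model_attribute_class(model_class=cls, related_name=name)
        cls.ModelAttributeClass = mav_class
        # bind the generated class to the module that defines `cls`,
        # so `from foo.models import FooAttr` works
        setattr(sys.modules[cls.__module__], mav_class.__name__, mav_class)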
Thanks all for thinking about this. I have updated the source code of the project at GitHub and added more tests. See https://github.com/zostera/django-mav
Since the actual generation of the models is done outside of foo/models.py (it takes place in mav/models.py), it seems Pythonically impossible to link the model to foo/models.py. Also, after rethinking this, it seems too automagical for Python (explicit is better than implicit, no magic).
So my new strategy is to use simple functions, a decorator to make it easy to add mav, and link the generated models to mav/attrs.py, so I can universally from mav.attrs import FooAttr. I also link the generated class to the Foo model as Foo._mav_class.
(In this comment, Foo is of course used as an example model that we want to add model-attribute-value to).
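A minimal sketch of that strategy, assuming a registration helper lives alongside an (initially empty) mav/attrs.py module; the function and import paths here are illustrative, not the actual django-mav API:

import importlib

from mav.models import create_model_attribute_class  # illustrative import path

def add_mav(cls):
    """Generate the attribute model for `cls`, link it to `cls`, and register it in mav.attrs."""
    attr_class = create_model_attribute_class(model_class=cls)
    cls._mav_class = attr_class
    # expose the generated model so `from mav.attrs import FooAttr` works
    attrs_module = importlib.import_module("mav.attrs")
    setattr(attrs_module, attr_class.__name__, attr_class)
    return cls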