Generate a pydantic model from pydantic object - python

Is it possible to create a pydantic model from an instance of a pydantic model, so that the values are maintained? Something like this:
from typing import Optional

from pydantic import BaseModel, Field, create_model


class ExampleModel(BaseModel):
    some_text: str
    optional_number: Optional[float]


instance = ExampleModel(some_text="foo")
dynamic_Model = create_model("Parameters", __config__=instance.Config)
dyn_instance = dynamic_Model()

print(instance)
print(dyn_instance)  # this has no attributes, so it's an empty line
print("Is it equal? " + str(dyn_instance == instance))  # can this be true?
If you wonder about the use case: I want to build a web app with Streamlit and Streamlit-pydantic. The latter renders a UI input mask from a pydantic model like this:
instance_of_pydantic_model = sp.pydantic_form(model=pydanticModel, key='some key')
See it in action.
In a multi-page application, this leads to the problem that the input mask will not display any of the user's input after switching to another page and back.

If you use the create_model function properly, it works:
dynamic_Model = create_model("Parameters", **vars(instance))
With Streamlit-pydantic the input mask stays consistent, even with optional fields, which are now populated with a value.
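If you also want the rebuilt model to keep each field's original type instead of having it inferred from the default values, something along these lines should also work. This is a rough sketch against the Pydantic v1 ModelField API (outer_type_), not part of the original answer:
field_definitions = {
    name: (field.outer_type_, getattr(instance, name))
    for name, field in instance.__fields__.items()
}
dynamic_Model = create_model("Parameters", **field_definitions)
print(dynamic_Model())  # pre-populated with the values from `instance`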

Pydantic does not validate the key/values of dict fields

I have the following simple data model:
from typing import Dict

from pydantic import BaseModel


class TableModel(BaseModel):
    table: Dict[str, str]
I want to add multiple tables like this:
tables = TableModel(table={'T1': 'Tea'})
print(tables) # table={'T1': 'Tea'}
tables.table['T2'] = 'coffee'
tables.table.update({'T3': 'Milk'})
print(tables) # table={'T1': 'Tea', 'T2': 'coffee', 'T3': 'Milk'}
So far everything is working as expected. However the next piece of code does not raise any error:
tables.table[1] = 2
print(tables) # table={'T1': 'Tea', 'T2': 'coffee', 'T3': 'Milk', 1: 2}
I changed the tables field name to __root__; with this change I see the same behavior.
I also added validate_assignment = True to the model Config, but that does not help either.
How can I get the model to validate the dict fields? Am I missing something basic here?
There are actually two distinct issues here that I'll address separately.
Mutating a dict on a Pydantic model
Observed behavior
from typing import Dict

from pydantic import BaseModel


class TableModel(BaseModel):
    table: Dict[str, str]

    class Config:
        validate_assignment = True


instance = TableModel(table={"a": "b"})
instance.table[1] = object()
print(instance)
Output: table={'a': 'b', 1: <object object at 0x7f7c427d65a0>}
Both key and value type clearly don't match our annotation of table. So, why does the assignment instance.table[1] = object() not cause a validation error?
Explanation
The reason is rather simple: There is no mechanism to enforce validation here. You need to understand what happens here from the point of view of the model.
A model can validate attribute assignment (if you configure validate_assignment = True). It does so by hooking into the __setattr__ method and running the value through the appropriate field validator(s).
But in that example above, we never called BaseModel.__setattr__. Instead, we called the __getattribute__ method that BaseModel inherits from object to access the value of instance.table. That returned the dictionary object ({"a": "b"}). And then we called the dict.__setitem__ method on that dictionary and added a key-value-pair of 1: object() to it.
The dictionary is just a regular old dictionary without any validation logic. And the mutation of that dictionary is completely obscure to the Pydantic model. It has no way of knowing that after accessing the object currently assigned to the table field, we changed something inside that object.
Validation would only be triggered if we actually assigned a new object to the table field of the model. But that is not what happens here.
If we instead tried to do instance.table = {1: object()}, we would get a validation error because now we are actually setting the table attribute and trying to assign a value to it.
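To illustrate the difference, here is a minimal sketch continuing the example above (with validate_assignment = True still configured on the model):
from pydantic import ValidationError

try:
    instance.table = {1: object()}  # real attribute assignment -> __setattr__ -> validation
except ValidationError as error:
    print(error)  # the object() value fails str validation (the int key would merely be coerced)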
Possible workaround
Depending on how you intend to use the model, you could ensure that changes in the table dictionary will always happen "outside" of the model and are followed by a re-assignment in the form instance.table = .... I would say that is probably the most practical option. In general, re-parsing (subsets of) data should ensure consistency, if you mutated values. Something like this should work (i.e. cause an error):
tables.table[1] = 2
tables = TableModel.parse_obj(tables.dict())
Another option might be to play around and define your own subtype of Dict and add validation logic there, but I am not sure how much "reinventing the wheel" that might entail.
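To give an idea of what that could look like, here is a rough, untested sketch of such a subtype using the Pydantic v1 custom type hook __get_validators__; the class names and error messages are just placeholders:
from typing import Any, Dict

from pydantic import BaseModel


class ValidatedStrDict(Dict[str, str]):
    @classmethod
    def __get_validators__(cls):
        yield cls.validate

    @classmethod
    def validate(cls, value: Any) -> "ValidatedStrDict":
        if not isinstance(value, dict):
            raise TypeError("dict required")
        checked = cls()
        for key, val in value.items():
            checked[key] = val  # routed through __setitem__ below
        return checked

    def __setitem__(self, key: Any, value: Any) -> None:
        # note: dict.update() bypasses __setitem__ and would need overriding as well
        if not isinstance(key, str) or not isinstance(value, str):
            raise TypeError("keys and values must be str")
        super().__setitem__(key, value)


class ValidatedTableModel(BaseModel):
    table: ValidatedStrDict


instance = ValidatedTableModel(table={"a": "b"})
instance.table["T2"] = "coffee"   # fine
instance.table[1] = object()      # now raises TypeError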
The most sophisticated option could maybe be a descriptor-based approach, where instead of just calling __getattribute__, a custom descriptor intercepts the attribute access and triggers the assignment validation. But that is just an idea. I have not tried this and don't know if that might break other Pydantic magic.
Implicit type coercion
Observed behavior
from typing import Dict

from pydantic import BaseModel


class TableModel(BaseModel):
    table: Dict[str, str]


instance = TableModel(table={1: 2})
print(instance)
Output: table={'1': '2'}
Explanation
This is easily explained: it is expected behavior that was put in place by choice. The idea is that if we can "simply" coerce a value to the specified type, we want to do that. Although you defined both the key and value type as str, passing an int for each is no big deal because the default string validator can just do str(1) and str(2) respectively.
Thus, instead of raising a validation error, the table value ends up as {"1": "2"}.
Possible workaround
If you do not want this implicit coercion to happen, there are strict types that you can use for the annotation. In this case you could use table: Dict[StrictStr, StrictStr]. Then the previous example would indeed raise a validation error.
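A minimal sketch of that strict variant (Pydantic v1; the model name is just a placeholder):
from typing import Dict

from pydantic import BaseModel, StrictStr, ValidationError


class StrictTableModel(BaseModel):
    table: Dict[StrictStr, StrictStr]


try:
    StrictTableModel(table={1: 2})
except ValidationError as error:
    print(error)  # int keys/values are no longer coerced to str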

Cannot determine if type of field in a Pydantic model is of type List

I am trying to automatically convert a Pydantic model to a DB schema. To do that, I am recursively looping through a Pydantic model's fields to determine the type of field.
As an example, I have this simple model:
from typing import List

from pydantic import BaseModel


class TestModel(BaseModel):
    tags: List[str]
I am recursing through the model using the __fields__ property as described here: https://docs.pydantic.dev/usage/models/#model-properties
If I do type(TestModel).__fields__['tags'] I see:
ModelField(name='tags', type=List[str], required=True)
I want to programmatically check if the ModelField type has a List origin. I have tried the following, and none of them work:
type(TestModel).__fields__['tags'].type_ is List[str]
type(TestModel).__fields__['tags'].type_ == List[str]
typing.get_origin(type(TestModel).__fields__['tags'].type_) is List
typing.get_origin(type(TestModel).__fields__['tags'].type_) == List
Frustratingly, this does return True:
type(TestModel).__fields__['tags'].type_ is str
What is the correct way for me to confirm a field is a List type?
Pydantic has the concept of the shape of a field. These shapes are encoded as integers and available as constants in the fields module. The more-or-less standard types have been accommodated there already. If a field was annotated with list[T], then the shape attribute of the field will be SHAPE_LIST and the type_ will be T.
The type_ refers to the element type in the context of everything that is not SHAPE_SINGLETON, i.e. with container-like types. This is why you get str in your example.
Thus for something as simple as list, you can simply check the shape against that constant:
from pydantic import BaseModel
from pydantic.fields import SHAPE_LIST


class TestModel(BaseModel):
    tags: list[str]
    other: tuple[str]


tags_field = TestModel.__fields__["tags"]
other_field = TestModel.__fields__["other"]

assert tags_field.shape == SHAPE_LIST
assert other_field.shape != SHAPE_LIST
If you want more insight into the actual annotation of the field, that is stored in the annotation attribute of the field. With that you should be able to do all the typing related analyses like get_origin.
That means another way of accomplishing your check would be this:
from typing import get_origin

from pydantic import BaseModel


class TestModel(BaseModel):
    tags: list[str]
    other: tuple[str]


tags_field = TestModel.__fields__["tags"]
other_field = TestModel.__fields__["other"]

assert get_origin(tags_field.annotation) is list
assert get_origin(other_field.annotation) is tuple
Sadly, neither of those attributes is officially documented anywhere as far as I know, but the beauty of open source is that we can just check ourselves. Neither the attributes nor the shape constants are obfuscated, protected or made private in any of the usual ways, so I'll assume they are stable (at least until Pydantic v2 drops).

How to import a Pydantic model into SQLModel?

I generated a Pydantic model and would like to import it into SQLModel. Since said model does not inherit from the SQLModel class, it is not registered in the metadata which is why
SQLModel.metadata.create_all(engine)
just ignores it.
In this discussion I found a way to manually add models:
SQLModel.metadata.tables["hero"].create(engine)
But doing so throws a KeyError for me.
SQLModel.metadata.tables["sopro"].create(engine)
KeyError: 'sopro'
My motivation for tackling the problem this way is that I want to generate an SQLModel from a simple dictionary like this:
model_dict = {"feature_a": int, "feature_b": str}
And in this SO answer, I found a working approach. Thank you very much in advance for your help!
As far as I know, it is not possible to simply convert an existing Pydantic model to an SQLModel at runtime. (At least as of now.)
There are a lot of things that happen during model definition. A custom metaclass is involved, so there is no way you can simply substitute a regular Pydantic model class for a real SQLModel class, short of manually monkey-patching all the missing pieces.
That being said, you clarified that your actual motivation was to be able to dynamically create an SQLModel class at runtime from a dictionary of field definitions. Luckily, this is in fact possible. All you need to do is utilize the Pydantic create_model function and pass the correct __base__ and __cls_kwargs__ arguments:
from pydantic import create_model
from sqlmodel import SQLModel

field_definitions = {
    # your field definitions here
}

Hero = create_model(
    "Hero",
    __base__=SQLModel,
    __cls_kwargs__={"table": True},
    **field_definitions,
)
With that, SQLModel.metadata.create_all(engine) should create the corresponding database table according to your field definitions.
See this question for more details.
Be sure to use the correct form for the field definitions, as the example you gave would not be valid. As the documentation says, you need to define fields in the form of 2-tuples (or just a default value):
model_dict = {
    "feature_a": (int, ...),
    "feature_b": (str, ...),
    "feature_c": 3.14,
}
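Putting the pieces together, here is a minimal sketch (not from the original answer; the model name and the extra id primary-key column are assumptions, since SQLAlchemy requires a primary key for a mapped table):
from typing import Optional

from pydantic import create_model
from sqlmodel import Field, SQLModel, create_engine

model_dict = {
    "id": (Optional[int], Field(default=None, primary_key=True)),  # assumed primary key
    "feature_a": (int, ...),
    "feature_b": (str, ...),
}

Sopro = create_model(
    "Sopro",
    __base__=SQLModel,
    __cls_kwargs__={"table": True},
    **model_dict,
)

engine = create_engine("sqlite://")  # in-memory SQLite, just for demonstration
SQLModel.metadata.create_all(engine)  # creates the "sopro" table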
Hope this helps.

Conditionally set FastAPI response model for route

I'm trying to return a list of objects of type Company, including only "approved" ones, and with more or less attributes depending on whether the user requesting the list is a superuser or a regular user. This is my code so far:
@router.get("/", response_model=List[schema.CompanyRegularUsers])
def get_companies(db: Session = Depends(get_db), is_superuser: bool = Depends(check_is_superuser)):
    """
    If SU, also include sensitive data.
    """
    if is_superuser:
        return crud.get_companies_admin(db=db)
    return crud.get_companies_user(db=db)
The function correctly returns the objects according to the request (i.e., only is_approved=True companies for a regular request, and both is_approved=True and is_approved=False companies if requested by a superuser). Problem is, both cases use schema.CompanyRegularUsers, and I'd like to use schema.CompanySuperusers when SUs make the request.
How can I achieve that feature? I.e, is there a way to conditionally set the response_model property of the decorator function?
I've tried using JSONResponse and calling Pydantic's schema.CompanySuperusers.from_orm(), but it won't work with a list of Companies...
I ended up solving the riddle by returning a custom JSONResponse. It doesn't show up in the automatic documentation, but I think I can tackle that down the road. Code is as follows, in case it helps someone else:
...
from pydantic import parse_obj_as
from fastapi.responses import JSONResponse
from fastapi.encoders import jsonable_encoder
...
@router.get("/", response_model=List[schema.CompanyRegularUsers])
def get_companies(db: Session = Depends(get_db), is_superuser: bool = Depends(check_is_superuser)):
    """
    If SU, also include sensitive data.
    """
    if is_superuser:
        companies = parse_obj_as(List[schema.CompanyAdmin], crud.get_companies_admin(db=db))
        return JSONResponse(jsonable_encoder(companies))
    return crud.get_companies_user(db=db)
So, in the is_superuser branch, the path operation calls Pydantic's parse_obj_as in order to map the list of objects returned by SQLAlchemy's query to a list of CompanyAdmin objects. Then, it uses jsonable_encoder, the encoder FastAPI uses under the hood for every default response, to serialize the list.
You can try to use a Union type.
Your code would become
from typing import Union
@router.get("/", response_model=List[Union[schema.CompanyRegularUsers, schema.CompanySuperUser]])
This way, you specify as the response model a list of either schema.CompanyRegularUsers or schema.CompanySuperUser.
Let me know if it works, since I didn't test it

Required field with sensible default

Consider the following
from pydantic import BaseModel, Field


class Model(BaseModel):
    required: str
This will make required a required field for Model, however, in the FastAPI autogenerated Swagger docs it will have an example value of "string".
How can I make a required field with a sensible default? If I make a model like
from pydantic import BaseModel, Field


class Model(BaseModel):
    required: str = 'Sensible default'
Then the field required is no longer required, but it shows up with a sensible default in the docs. Is there an easy workaround for this?
You can use Field() to set up these options and checks.
from pydantic import BaseModel, Field


class Model(BaseModel):
    # Alternative ways to define the field (in a real model, each field needs a unique name):
    something: str  # required, shows "string"
    something: str = None  # not required, shows "string"
    something: str = Field(..., example="this is the default display")  # required, shows the example
    something: str = Field(None, example="Foobar")  # not required, shows the example
There are a multitude of different parameters that Field() can validate against.
I haven't looked into why the (Pydantic) model representation within the OpenAPI version that ships with FastAPI leaves the asterisk out, but the field is definitely still required (try putting a null value, or anything other than a string). This might just be a UI inconsistency.
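To double-check the required-plus-example variant outside of Swagger, here is a minimal sketch (not from the original answer):
from pydantic import BaseModel, Field, ValidationError


class Model(BaseModel):
    required: str = Field(..., example="Sensible default")  # the example only affects the schema


print(Model.schema()["properties"]["required"])  # includes 'example': 'Sensible default'
print(Model.schema()["required"])                # ['required'] -> still a required field

try:
    Model()  # still fails: the field has no default, only a schema example
except ValidationError as error:
    print(error)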
