Mypy Config Class - python

I am currently doing some work where I want to load a config from yaml into a class so that class attributes get specified. Depending on the work I am doing I have a different config class.
In the class I pre-specify the expected arguments as follows and then load data in from the file using BaseConfig::load_config(). This checks to see if the class has the corresponding attribute before using setattr() to assign the value.
class AutoencoderConfig(BaseConfig):
    def __init__(self, config_path: Path) -> None:
        super().__init__(config_path)
        self.NTRAIN: Optional[int] = None
        self.NVALIDATION: Optional[int] = None
        # ... more below
        self.load_config()
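The body of load_config isn't shown, but based on the description above it presumably does something like the following (a minimal sketch; the yaml handling, the stored config_path attribute, and the error message are my assumptions, not the original code):

import yaml

def load_config(self) -> None:
    # Hypothetical sketch: read the yaml file and assign only pre-declared attributes
    with open(self.config_path) as f:
        data = yaml.safe_load(f)
    for key, value in data.items():
        if not hasattr(self, key):
            raise AttributeError(f"Unexpected config key: {key}")
        setattr(self, key, value)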
When I am referring to attributes of the AutoencoderConfig in other scripts, I might say something like: config.NTRAIN. While this works just fine, Mypy does not like this and gives the following error:
path/to/file.py:linenumber: error: Unsupported operand types for / ("None" and "int")
The error arises because at the time of the check, the values from the yaml file have not been loaded yet, and the types are specified with self.NTRAIN: Optional[int] = None.
Is there any way I can avoid this without having to place # type: ignore at the end of every line where I refer to the config class?
Are there any best practices for using a config class like this?
EDIT:
Solution:
The solution provided by @SUTerliakov below works perfectly fine. In the end I decided to adopt a @dataclass approach:
from dataclasses import dataclass, field

@dataclass
class BaseConfig:
    _config: Dict[str, Any] = field(default_factory=dict, repr=False)

    def load_config(self, config_path: Path) -> None:
        ...
@dataclass
class AutoencoderConfig(BaseConfig):
    NTRAIN: int = field(init=False)
    NVALIDATION: int = field(init=False)
    # ... more below
This allows me to specify each variable as a field(init=False) which seems to be a nice, explicit way to solve the problem.
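With this, mypy sees NTRAIN as a plain int at every call site, so the division that originally triggered the error type-checks cleanly; a hypothetical usage sketch (the config file name is made up):

cfg = AutoencoderConfig()
cfg.load_config(Path("config.yaml"))
batches = cfg.NTRAIN / 2  # OK: NTRAIN is int, not Optional[int]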

Is it always int after load_config is called? (can it remain None?)
If it is always int, then do not mark them as Optional and don't assign None. Just declare them in the class body without initialization:
class AutoencoderConfig(BaseConfig):
    NTRAIN: int
    NVALIDATION: int

    def __init__(self, config_path: Path):
        super().__init__(config_path)
        # Whatever else
        self.load_config()
Otherwise you have to check for None explicitly before acting on the value. type: ignore is a bad solution for this case, as it just silences a real problem: Optional says that the variable can be None, and you cannot divide None by an integer, so this can cause runtime issues. In this case the proper solution would be something like this:
if self.NTRAIN is not None:
    foo = self.NTRAIN / 2
else:
    raise ValueError('NTRAIN required!')
    # Or:
    # foo = some_default_foo
    # if you can handle such a case
If you know that after some point it can't be None, then the cleanest option is to assert it like this:
assert self.NTRAIN is not None
foo = self.NTRAIN / 2
Both methods will make mypy happy, because NTRAIN really is not None in the conditional branch or after the assert.
EDIT: sorry, I probably misread your question. Does load_config need hasattr(AutoencoderConfig, 'NTRAIN') to be True to work? If so, the easiest solution would be:
class AutoencoderConfig(BaseConfig):
    NTRAIN: int = None  # type: ignore[assignment]
    NVALIDATION: int = None  # type: ignore[assignment]

    def __init__(self, config_path: Path):
        super().__init__(config_path)
        # Whatever else
        self.load_config()
You won't need any other type-ignores this way.

Related

Make Pydantic BaseModel fields optional including sub-models for PATCH

As already asked in similar questions, I want to support PATCH operations for a FastAPI application where the caller can specify as many or as few fields as they like, of a Pydantic BaseModel with sub-models, so that efficient PATCH operations can be performed, without the caller having to supply an entire valid model just in order to update two or three of the fields.
I've discovered there are 2 steps in the Pydantic PATCH tutorial that don't support sub-models. However, Pydantic is far too good for me to criticise it for something that it seems can be built using the tools that Pydantic provides. This question is to request an implementation of those 2 things while also supporting sub-models:
generate a new DRY BaseModel with all fields optional
implement deep copy with update of BaseModel
These problems are already recognised by Pydantic.
There is discussion of a class-based solution to the optional model.
And there are two issues open on the deep copy with update.
A similar question has been asked one or two times here on SO, and there are some great answers with different approaches to generating an all-fields-optional version of the nested BaseModel. After considering them all, this particular answer by Ziur Olpa seemed to me to be the best, providing a function that takes the existing model with optional and mandatory fields, and returning a new model with all fields optional: https://stackoverflow.com/a/72365032
The beauty of this approach is that you can hide the (actually quite compact) little function in a library and just use it as a dependency so that it appears in-line in the path operation function and there's no other code or boilerplate.
But the implementation provided in the previous answer did not take the step of dealing with sub-objects in the BaseModel being patched.
This question therefore requests an improved implementation of the all-fields-optional function that also deals with sub-objects, as well as a deep copy with update.
I have a simple example as a demonstration of this use-case, which although aiming to be simple for demonstration purposes, also includes a number of fields to more closely reflect the real world examples we see. Hopefully this example provides a test scenario for implementations, saving work:
import logging
from datetime import datetime, date
from collections import defaultdict
from pydantic import BaseModel
from fastapi import FastAPI, HTTPException, status, Depends
from fastapi.encoders import jsonable_encoder
app = FastAPI(title="PATCH demo")
logging.basicConfig(level=logging.DEBUG)
class Collection:
    collection = defaultdict(dict)

    def __init__(self, this, that):
        logging.debug("-".join((this, that)))
        self.this = this
        self.that = that

    def get_document(self):
        document = self.collection[self.this].get(self.that)
        if not document:
            raise HTTPException(
                status_code=status.HTTP_404_NOT_FOUND,
                detail="Not Found",
            )
        logging.debug(document)
        return document

    def save_document(self, document):
        logging.debug(document)
        self.collection[self.this][self.that] = document
        return document
class SubOne(BaseModel):
    original: date
    verified: str = ""
    source: str = ""
    incurred: str = ""
    reason: str = ""
    attachments: list[str] = []

class SubTwo(BaseModel):
    this: str
    that: str
    amount: float
    plan_code: str = ""
    plan_name: str = ""
    plan_type: str = ""
    meta_a: str = ""
    meta_b: str = ""
    meta_c: str = ""

class Document(BaseModel):
    this: str
    that: str
    created: datetime
    updated: datetime
    sub_one: SubOne
    sub_two: SubTwo
    the_code: str = ""
    the_status: str = ""
    the_type: str = ""
    phase: str = ""
    process: str = ""
    option: str = ""
@app.get("/endpoint/{this}/{that}", response_model=Document)
async def get_submission(this: str, that: str) -> Document:
    collection = Collection(this=this, that=that)
    return collection.get_document()

@app.put("/endpoint/{this}/{that}", response_model=Document)
async def put_submission(this: str, that: str, document: Document) -> Document:
    collection = Collection(this=this, that=that)
    return collection.save_document(jsonable_encoder(document))

@app.patch("/endpoint/{this}/{that}", response_model=Document)
async def patch_submission(
    document: Document,
    # document: optional(Document),  # <<< IMPLEMENT optional <<<
    this: str,
    that: str,
) -> Document:
    collection = Collection(this=this, that=that)
    existing = collection.get_document()
    existing = Document(**existing)
    update = document.dict(exclude_unset=True)
    updated = existing.copy(update=update, deep=True)  # <<< FIX THIS <<<
    updated = jsonable_encoder(updated)
    collection.save_document(updated)
    return updated
This example is a working FastAPI application, following the tutorial, and can be run with uvicorn example:app --reload. Except it doesn't work, because there's no all-optional fields model, and Pydantic's deep copy with update actually overwrites sub-models rather than updating them.
In order to test it the following Bash script can be used to run curl requests. Again I'm supplying this just to hopefully make it easier to get started with this question.
Just comment out the other commands each time you run it so that the command you want is used.
To demonstrate this initial state of the example app working you would run GET (expect 404), PUT (document stored), GET (expect 200 and same document returned), PATCH (expect 200), GET (expect 200 and updated document returned).
host='http://127.0.0.1:8000'
path="/endpoint/A123/B456"

method='PUT'
data='
{
"this":"A123",
"that":"B456",
"created":"2022-12-01T01:02:03.456",
"updated":"2023-01-01T01:02:03.456",
"sub_one":{"original":"2022-12-12","verified":"Y"},
"sub_two":{"this":"A123","that":"B456","amount":0.88,"plan_code":"HELLO"},
"the_code":"BYE"}
'

# method='PATCH'
# data='{"this":"A123","that":"B456","created":"2022-12-01T01:02:03.456","updated":"2023-01-02T03:04:05.678","sub_one":{"original":"2022-12-12","verified":"N"},"sub_two":{"this":"A123","that":"B456","amount":123.456}}'

method='GET'
data=''

if [[ -n $data ]]; then data=" --data '$data'"; fi
curl="curl -K curlrc -X $method '$host$path' $data"
echo $curl >&2
eval $curl
This curlrc will need to be co-located to ensure the content type headers are correct:
--cookie "_cookies"
--cookie-jar "_cookies"
--header "Content-Type: application/json"
--header "Accept: application/json"
--header "Accept-Encoding: compress, gzip"
--header "Cache-Control: no-cache"
So what I'm looking for is the implementation of optional that is commented out in the code, and a fix for existing.copy with the update parameter, that will enable this example to be used with PATCH calls that omit otherwise mandatory fields.
The implementation does not have to conform precisely to the commented out line, I just provided that based on Ziur Olpa's previous answer.
When I first posed this question I thought that the only problem was how to turn all fields Optional in a nested BaseModel, but actually that was not difficult to fix.
The real problem with partial updates when implementing a PATCH call is that the Pydantic BaseModel.copy method doesn't attempt to support nested models when applying its update parameter. That's quite an involved task for the generic case, considering you may have fields that are dicts, lists, or sets of another BaseModel, just for instance. Instead it just unpacks the update dict using **: https://github.com/pydantic/pydantic/blob/main/pydantic/main.py#L353
I haven't got a proper implementation of that for Pydantic, but since I've got a working example PATCH by cheating, I'm going to post this as an answer and see if anyone can fault it or provide something better, possibly even an implementation of BaseModel.copy that supports updates for nested models.
Rather than post the implementations separately, I am going to update the example given in the question so that it has a working PATCH; being a full demonstration of PATCH, hopefully this will help others more.
The two additions are partial and merge. partial is what's referred to as optional in the question code.
partial:
This is a function that takes any BaseModel and returns a new BaseModel with all fields Optional, including sub-object fields. That's enough for Pydantic to allow through any subset of fields without throwing an error for "missing fields". It's recursive - not really popular - but given these are nested data models the depth is not expected to exceed single digits.
merge:
The BaseModel update-on-copy method operates on an instance of BaseModel, but supporting all the possible type variations when descending through a nested model is the hard part, and the database data and the incoming update are easily available as plain Python dicts; so this is the cheat: merge is an implementation of a nested dict update instead, and since the dict data has already been validated at one point or another, it should be fine.
Here's the full example solution:
import logging
from typing import Optional, Type
from datetime import datetime, date
from functools import lru_cache
from collections import defaultdict
from pydantic import BaseModel, create_model
from fastapi import FastAPI, HTTPException, status, Depends, Body
from fastapi.encoders import jsonable_encoder
app = FastAPI(title="Nested model PATCH demo")
logging.basicConfig(level=logging.DEBUG)
class Collection:
    collection = defaultdict(dict)

    def __init__(self, this, that):
        logging.debug("-".join((this, that)))
        self.this = this
        self.that = that

    def get_document(self):
        document = self.collection[self.this].get(self.that)
        if not document:
            raise HTTPException(
                status_code=status.HTTP_404_NOT_FOUND,
                detail="Not Found",
            )
        logging.debug(document)
        return document

    def save_document(self, document):
        logging.debug(document)
        self.collection[self.this][self.that] = document
        return document
class SubOne(BaseModel):
    original: date
    verified: str = ""
    source: str = ""
    incurred: str = ""
    reason: str = ""
    attachments: list[str] = []

class SubTwo(BaseModel):
    this: str
    that: str
    amount: float
    plan_code: str = ""
    plan_name: str = ""
    plan_type: str = ""
    meta_a: str = ""
    meta_b: str = ""
    meta_c: str = ""

class SubThree(BaseModel):
    one: str = ""
    two: str = ""

class Document(BaseModel):
    this: str
    that: str
    created: datetime
    updated: datetime
    sub_one: SubOne
    sub_two: SubTwo
    # sub_three: dict[str, SubThree] = {}  # Hah hah not really
    the_code: str = ""
    the_status: str = ""
    the_type: str = ""
    phase: str = ""
    process: str = ""
    option: str = ""
@lru_cache
def partial(baseclass: Type[BaseModel]) -> Type[BaseModel]:
    """Make all fields in supplied Pydantic BaseModel Optional, for use in PATCH calls.

    Iterate over fields of baseclass, descend into sub-classes, convert fields to Optional and return new model.
    Cache newly created model with lru_cache to ensure it's only created once.
    Use with Body to generate the partial model on the fly, in the PATCH path operation function.

    - https://stackoverflow.com/questions/75167317/make-pydantic-basemodel-fields-optional-including-sub-models-for-patch
    - https://stackoverflow.com/questions/67699451/make-every-fields-as-optional-with-pydantic
    - https://github.com/pydantic/pydantic/discussions/3089
    - https://fastapi.tiangolo.com/tutorial/body-updates/#partial-updates-with-patch
    """
    fields = {}
    for name, field in baseclass.__fields__.items():
        type_ = field.type_
        if type_.__base__ is BaseModel:
            fields[name] = (Optional[partial(type_)], {})
        else:
            fields[name] = (Optional[type_], None) if field.required else (type_, field.default)
    # https://docs.pydantic.dev/usage/models/#dynamic-model-creation
    validators = {"__validators__": baseclass.__validators__}
    return create_model(baseclass.__name__ + "Partial", **fields, **validators)
def merge(original, update):
    """Update original nested dict with values from update, retaining original values that are missing in update.

    - https://github.com/pydantic/pydantic/issues/3785
    - https://github.com/pydantic/pydantic/issues/4177
    - https://docs.pydantic.dev/usage/exporting_models/#modelcopy
    - https://github.com/pydantic/pydantic/blob/main/pydantic/main.py#L353
    """
    for key in update:
        if key in original:
            if isinstance(original[key], dict) and isinstance(update[key], dict):
                merge(original[key], update[key])
            elif isinstance(original[key], list) and isinstance(update[key], list):
                original[key].extend(update[key])
            else:
                original[key] = update[key]
        else:
            original[key] = update[key]
    return original
@app.get("/endpoint/{this}/{that}", response_model=Document)
async def get_submission(this: str, that: str) -> Document:
    collection = Collection(this=this, that=that)
    return collection.get_document()

@app.put("/endpoint/{this}/{that}", response_model=Document)
async def put_submission(this: str, that: str, document: Document) -> Document:
    collection = Collection(this=this, that=that)
    return collection.save_document(jsonable_encoder(document))

@app.patch("/endpoint/{this}/{that}", response_model=Document)
async def patch_submission(
    this: str,
    that: str,
    document: partial(Document),  # <<< IMPLEMENTED partial TO MAKE ALL FIELDS Optional <<<
) -> Document:
    collection = Collection(this=this, that=that)
    existing_document = collection.get_document()
    incoming_document = document.dict(exclude_unset=True)
    # VVV IMPLEMENTED merge INSTEAD OF USING BROKEN PYDANTIC copy WITH update VVV
    updated_document = jsonable_encoder(merge(existing_document, incoming_document))
    collection.save_document(updated_document)
    return updated_document
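To see the two helpers working together outside FastAPI, here's a quick sketch (the field values are made up for illustration):

PartialDocument = partial(Document)
patch = PartialDocument(sub_two={"amount": 123.456}, the_code="NEW")
existing = {
    "this": "A123",
    "that": "B456",
    "sub_two": {"this": "A123", "that": "B456", "amount": 0.88, "plan_code": "HELLO"},
    "the_code": "BYE",
}
merged = merge(existing, patch.dict(exclude_unset=True))
# merged["sub_two"] now has amount 123.456 while plan_code "HELLO" survives,
# unlike BaseModel.copy(update=...), which would replace sub_two wholesale.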

Static type hint for cache endpoints with mypy

I have a cache system which uses Redis. To interact with it, I use two functions:
def get_from_cache(key: str, default=None) -> Any:
    """
    Return data from the redis Cache
    If the data doesn't exist, return default
    """
    data = redis_instance.get(key)
    if data is None:
        return default
    return pickle.loads(data)
def save_to_cache(key: str, value: Any, **kwargs):
    """
    Save data into the Redis Cache
    If the data is None, delete the redis key
    """
    if value is None:
        redis_instance.delete(key)
    else:
        pickled_data = pickle.dumps(value)
        redis_instance.set(key, pickled_data, **kwargs)
Because each cache key can only store a specific type of binarized data, I would like to ensure that when I call save_to_cache with a bad data type, mypy will raise an error.
Since all my key values are stored in a specific file to ensure all of my program uses the same key for the same things, I was thinking about giving the key argument a specific type, something like this:
_T = TypeVar('_T')

def get_from_cache(key: CacheEndpoint[_T], default: Optional[_T] = None) -> _T:
    ...

def save_to_cache(key: CacheEndpoint[_T], value: Optional[_T], **kwargs):
    ...
But I still need to use key as a string at some point, so I don't really know how I could declare such a CacheEndpoint custom type.
(Another solution would be to make a type check during execution, but it seems quite complicated to do and really inefficient).
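Not part of the original post, but one possible shape for CacheEndpoint is a small generic wrapper whose only runtime payload is the key string; the type parameter exists purely for the type checker (a minimal sketch; USER_COUNT is a hypothetical endpoint):

from typing import Any, Generic, Optional, TypeVar

_T = TypeVar('_T')

class CacheEndpoint(Generic[_T]):
    """A typed cache key: wraps the redis key string, phantom-typed by the stored value."""
    def __init__(self, key: str) -> None:
        self.key = key

USER_COUNT: CacheEndpoint[int] = CacheEndpoint("user_count")

def save_to_cache(key: CacheEndpoint[_T], value: Optional[_T], **kwargs: Any) -> None:
    ...  # use key.key wherever redis needs the plain string

save_to_cache(USER_COUNT, 42)      # OK
save_to_cache(USER_COUNT, "oops")  # mypy: incompatible type "str"; expected "Optional[int]"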

How to check if a Python Generic type has a specific attribute?

Rant first, code will be after!
Right now, I'm attempting to create a command line interface module for a generic shop system I can use in any project going forward. I'm using MyPy since I frankly don't trust myself to keep types straight and all that, so the attempted genericness of this project has led to me using some stuff I've never used before. The goal is that I can have a store hold and sell whatever items I want as long as they have a .name.
Anyway, while writing the function to remove an item from the user's cart by checking an entered name (for example, if I have an item with name "sword" in my cart, I would type "remove sword" into the input), I realized I had no way of narrowing down the inputted types so that I could be sure that the ItemClass (the name of my TypeVar) has a .name. Since I had not narrowed this down, attempting to access .name leads to MyPy throwing a fit. I know type-checker driven development is a bad habit, but I really don't want to have to slap an ugly Any on there.
For testing purposes, the class I'm using for ItemClass is simply a hashable dataclass with a .name attribute, but ideally it could be any hashable class with a .name attribute.
Here is the offending function. Note that this sits inside a Shop class, which has a self.cart attribute which stores ItemClass objects as keys and some associated metadata in a tuple as the value:
def remove_from_cart(self, item_name):
    match: list[ItemClass] = [
        item for item in self.cart.keys() if item.name == item_name
    ]
    if match:
        self.cart.pop(match[0])
MyPy throws a fit on the middle line of the list comprehension, because I have not ensured that the type used for ItemClass has a .name. Ideally, I would want to make it so that the type checker will throw an error if I even try to use a class which doesn't have the required attribute. If at all possible, I'd like to try my absolute best to avoid having to make this check at runtime, just because I'd prefer to be able to have the checker say "Hey, this type you're attempting to sell: it doesn't have a .name!" before I've even made it to the F5 button.
Thanks in advance,
HobbitJack
EDIT 1:
My Shop class:
class Shop(Generic[ItemClass, PlayerClass]):
    def __init__(self, items: dict[ItemClass, tuple[Decimal, int]]):
        self.inventory: dict[ItemClass, tuple[Decimal, int]] = items
        self.cart: dict[ItemClass, tuple[Decimal, int]] = {}

    def inventory_print(self):
        for item, metadata in self.inventory.items():
            print(f"{item}: {metadata}")

    def add_to_cart(self, item: ItemClass, price: Decimal, amount: int):
        self.cart[item] = (price, amount)

    def remove_from_cart(self, item_name):
        match: list[ItemClass] = [
            item for item in self.cart if item.name == item_name
        ]
        if match:
            self.cart.pop(match[0])

    def print_inventory(self):
        for item, metadata in self.inventory.items():
            print(f"{item}: {metadata[1]} * ${metadata[0]:.2f}")

    def cart_print(self):
        total_price = Decimal(0)
        for item, metadata in self.inventory.items():
            total_price += metadata[0]
            print(f"{item}: {metadata[1]} * ${metadata[0]:.2f}")
        print(f"Total price: {total_price:.2f}")
and my test Item class:
@dataclass(frozen=True)
class Item:
    name: str
    durability: float
with the MyPy error message:
"ItemClass" has no attribute "name"
EDIT 2:
Here goes for an attempt at a simple version of what I've got above:
In module 1, simple_test.py, we have a simple dataclass:
from dataclasses import dataclass

@dataclass
class MyDataclass:
    my_hello_world: str
In module 2, we attempt to use a TypeVar to print the .my_hello_world of a MyDataclass object:
from typing import TypeVar
import simple_test

MyTypeVar = TypeVar("MyTypeVar")

def hello_print(string_container: MyTypeVar):
    print(string_container.my_hello_world)

hello_print(simple_test.MyDataclass("Hello, World!"))
MyPy complains on the print line of hello_print() with the error message "MyTypeVar" has no attribute "my_hello_world".
My goal is that I would be able to narrow down the type being passed into hello_print into one which definitely has a .my_hello_world so that it stops complaining.
You can bind the ItemClass type variable to a protocol that defines the required attribute. It will then be compatible with any object that has a name attribute which is a string.
from dataclasses import dataclass
from typing import Protocol, TypeVar

class HasName(Protocol):
    name: str

ItemClass = TypeVar('ItemClass', bound=HasName)

@dataclass
class MyDataclass:
    name: str

def hello_print(string_container: ItemClass):
    print(string_container.name)

hello_print(MyDataclass("Hello, World!"))
Now hello_print knows only that string_container has a name attribute which is a string. When you pass MyDataclass to it, mypy checks that it really does have a string name attribute. If it does, type checking succeeds. If you try to pass
@dataclass
class AnotherDataclass:
    foo: str

hello_print(AnotherDataclass('Foo'))
mypy will complain that AnotherDataclass is not compatible with ItemClass. You can try this in the playground.
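Applied to the Shop class from the question, the same bound is all that's needed; a sketch of the relevant parts (PlayerClass left unbound, as in the original):

from decimal import Decimal
from typing import Generic, Protocol, TypeVar

class HasName(Protocol):
    name: str

ItemClass = TypeVar('ItemClass', bound=HasName)
PlayerClass = TypeVar('PlayerClass')

class Shop(Generic[ItemClass, PlayerClass]):
    def __init__(self, items: dict[ItemClass, tuple[Decimal, int]]):
        self.inventory: dict[ItemClass, tuple[Decimal, int]] = items
        self.cart: dict[ItemClass, tuple[Decimal, int]] = {}

    def remove_from_cart(self, item_name: str) -> None:
        # mypy now knows every ItemClass has a .name
        match: list[ItemClass] = [
            item for item in self.cart if item.name == item_name
        ]
        if match:
            self.cart.pop(match[0])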

Pycharm type-hinting in __init__ does not work

I am using Python 2.7 and PyCharm Community Edition 2016.3.2. I have the following snippet of code:
class ParentNode(node.Node):
    """ParentNode is a subclass of Node but also takes an additional child_nodes parameter.

    @type child_nodes: dict[str: child_node.ChildNode]
    """
    def __init__(self, name, node_type, FSPs, CTT_distributions, TMR_distributions,
                 prob_on_convoy, rep_rndstrm, child_nodes):
        """ParentNode is a subclass of Node but also takes an additional child_nodes parameter.

        @type child_nodes: dict[str: child_node.ChildNode]
        """
        node.Node.__init__(self, name, node_type, FSPs, CTT_distributions, TMR_distributions,
                           prob_on_convoy, rep_rndstrm)
        self.network_status = 'parent'
        self.child_nodes = child_nodes
The issue is that when I hover over self.child_nodes or child_nodes, the inferred type is shown as Any instead of Dict[str, ChildNode]. I don't understand why the type hinting I have in the docstring does not work in this case.
Replace
dict[str: child_node.ChildNode]
with
dict[str, child_node.ChildNode]
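The colon form is slice syntax, not a type: the comma form tells PyCharm that the key type is str and the value type is child_node.ChildNode. In context, the corrected docstring would read:

"""ParentNode is a subclass of Node but also takes an additional child_nodes parameter.

@type child_nodes: dict[str, child_node.ChildNode]
"""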

Python: How to decode enum type from json

# Python 2 code, per the question (enum via the enum34 backport)
import json
from enum import IntEnum
from json import JSONEncoder

import yaml

class MSG_TYPE(IntEnum):
    REQUEST = 0
    GRANT = 1
    RELEASE = 2
    FAIL = 3
    INQUIRE = 4
    YIELD = 5

    def __json__(self):
        return str(self)

class MessageEncoder(JSONEncoder):
    def default(self, obj):
        return obj.__json__()

class Message(object):
    def __init__(self, msg_type, src, dest, data):
        self.msg_type = msg_type
        self.src = src
        self.dest = dest
        self.data = data

    def __json__(self):
        return dict(
            msg_type=self.msg_type,
            src=self.src,
            dest=self.dest,
            data=self.data,
        )

    def ToJSON(self):
        return json.dumps(self, cls=MessageEncoder)

msg = Message(msg_type=MSG_TYPE.FAIL, src=0, dest=1, data="hello world")
encoded_msg = msg.ToJSON()
decoded_msg = yaml.load(encoded_msg)
print type(decoded_msg['msg_type'])
When calling print type(decoded_msg['msg_type']), I get the result <type 'str'> instead of the original MSG_TYPE type. I feel like I should also write a custom json decoder but kind of confused how to do that. Any ideas? Thanks.
When calling print type(decoded_msg['msg_type']), I get the result <type 'str'> instead of the original MSG_TYPE type.
Well, yeah, that's because you told MSG_TYPE to encode itself like this:
def __json__(self):
    return str(self)
So, that's obviously going to decode back to a string. If you don't want that, come up with some unique way to encode the values, instead of just encoding their string representations.
The most common way to do this is to encode all of your custom types (including your enum types) using some specialized form of object—just like you've done for Message. For example, you might put a py-type field in the object which encodes the type of your object, and then the meanings of the other fields all depend on the type. Ideally you'll want to abstract out the commonalities instead of hardcoding the same thing 100 times, of course.
I feel like I should also write a custom json decoder but kind of confused how to do that.
Well, have you read the documentation? Where exactly are you confused? You're not going to get a complete tutorial by tacking on a followup to a StackOverflow question…
Assuming you've got a special object structure for all your types, you can use an object_hook to decode the values back to the originals. For example, as a quick hack:
from json import JSONDecoder, JSONEncoder

class MessageEncoder(JSONEncoder):
    def default(self, obj):
        return {'py-type': type(obj).__name__, 'value': obj.__json__()}

class MessageDecoder(JSONDecoder):
    def __init__(self, hook=None, *args, **kwargs):
        if hook is None:
            hook = self.hook
        super().__init__(*args, object_hook=hook, **kwargs)

    def hook(self, obj):
        if isinstance(obj, dict):
            pytype = obj.get('py-type')
            if pytype:
                t = globals()[pytype]
                return t.__unjson__(**obj['value'])
        return obj
And now, in your Message class:
@classmethod
def __unjson__(cls, msg_type, src, dest, data):
    return cls(msg_type, src, dest, data)
And you need a MSG_TYPE.__json__ that returns a dict, maybe just {'name': str(self)}, then an __unjson__ that does something like getattr(cls, name).
A real-life solution should probably either have the classes register themselves instead of looking them up by name, or should handle looking them up by qualified name instead of just going to globals(). And you may want to let things encode to something other than object—or, if not, to just cram py-type into the object instead of wrapping it in another one. And there may be other ways to make the JSON more compact and/or readable. And a little bit of error handling would be nice. And so on.
You may want to look at the implementation of jsonpickle—not because you want to do the exact same thing it does, but to see how it hooks up all the pieces.
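A registration approach along those lines might look like this (a sketch, not from the answer; the json_type decorator name is made up):

_JSON_TYPES = {}

def json_type(cls):
    # Register the class by name so the decoder can look it up
    # without reaching into globals().
    _JSON_TYPES[cls.__name__] = cls
    return cls

@json_type
class Message(object):
    ...

# and in MessageDecoder.hook:
#     t = _JSON_TYPES[pytype]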
Overriding the default method of the encoder won't matter in this case because your object never gets passed to the method. It's treated as an int.
If you run the encoder on its own:
msg_type = MSG_TYPE.RELEASE
MessageEncoder().encode(msg_type)
You'll get:
'MSG_TYPE.RELEASE'
If you can, use an Enum and you shouldn't have any issues. I also asked a similar question:
How do I serialize IntEnum from enum34 to json in python?
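For completeness: on Python 3, an IntEnum serializes as its integer value out of the box, so decoding is just a constructor call (a minimal sketch):

import json
from enum import IntEnum

class MSG_TYPE(IntEnum):
    REQUEST = 0
    FAIL = 3

encoded = json.dumps({"msg_type": MSG_TYPE.FAIL})  # '{"msg_type": 3}'
decoded = json.loads(encoded)
msg_type = MSG_TYPE(decoded["msg_type"])  # MSG_TYPE.FAIL again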
