I am working with the htcondor Python bindings (https://htcondor.readthedocs.io/en/latest/apis/python-bindings/index.html).
You don't need to know htcondor, but for reference I am working with the
htcondor.JobEvent class to get the data that I want.
From the description of that object it follows that it behaves like a dictionary,
but it has no __dict__ property.
Basically you can't tell what this object is, because it's translated from C++ into Python. I therefore want to wrap it, keeping all of its functionality while adding more.
The way I am solving this at the moment is:
import json
from datetime import datetime as date_time

from htcondor import JobEvent as HTCJobEvent

class HTCJobEventWrapper:
    """
    Wrapper for HTCondor JobEvent.

    Extracts event number and time_stamp of an event.
    The wrapped event can be printed to the terminal for dev purposes.

    :param job_event: HTCJobEvent
    """

    def __init__(self, job_event: HTCJobEvent):
        self.wrapped_class = job_event
        self.event_number = job_event.get('EventTypeNumber')
        self.time_stamp = date_time.strptime(
            job_event.get('EventTime'),
            STRP_FORMAT  # format string defined elsewhere in my module
        )

    def __getattr__(self, attr):
        return getattr(self.wrapped_class, attr)

    def get(self, *args, **kwargs):
        """Wraps wrapped_class get function."""
        return self.wrapped_class.get(*args, **kwargs)

    def items(self):
        """Wraps wrapped_class items method."""
        return self.wrapped_class.items()

    def keys(self):
        """Wraps wrapped_class keys method."""
        return self.wrapped_class.keys()

    def values(self):
        """Wraps wrapped_class values method."""
        return self.wrapped_class.values()

    def to_dict(self):
        """Turns wrapped_class items into a dictionary."""
        return dict(self.items())

    def __repr__(self):
        return json.dumps(
            self.to_dict(),
            indent=2
        )
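For context, a hedged sketch of how such a wrapper might be fed; htcondor.JobEventLog and its events() method come from the linked docs, and the log file name is made up:

import htcondor

# Read the events currently in a job log and wrap each one.
log = htcondor.JobEventLog("job.log")        # made-up file name
for event in log.events(stop_after=0):       # non-blocking read
    wrapped = HTCJobEventWrapper(event)
    print(wrapped.event_number, wrapped.time_stamp)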
With this it's possible to get any attribute and to use the methods described in the documentation.
However, as you can see, HTCJobEventWrapper is not of type htcondor.JobEvent
and does not inherit from it.
If you try to instantiate the htcondor.JobEvent class directly, it results in the following error: RuntimeError: This class cannot be instantiated from Python.
What I want:
I would like it to be a child class that completely copies a given htcondor.JobEvent object,
adds the functionality I want, and returns an HTCJobEventWrapper object.
This kind of relates to this question: completely wrap an object in python.
Is there a pythonic way to dynamically call every attribute, function or method on self.wrapped_class, just like with getattr?
I've tried getattr, but it only works for attributes.
The regular way of JSON-serializing custom non-serializable objects is to subclass json.JSONEncoder and then pass a custom encoder to json.dumps().
It usually looks like this:
class CustomEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, Foo):
            return obj.to_json()
        return json.JSONEncoder.default(self, obj)

print(json.dumps(obj, cls=CustomEncoder))
What I'm trying to do is to make something serializable with the default encoder. I looked around but couldn't find anything.
My thought is that there would be some field that the encoder looks at to determine the json encoding. Something similar to __str__. Perhaps a __json__ field.
Is there something like this in python?
I want one class of a module I'm making to be JSON serializable to everyone who uses the package, without them worrying about implementing their own [trivial] custom encoders.
As I said in a comment to your question, after looking at the json module's source code, it does not appear to lend itself to doing what you want. However, the goal could be achieved by what is known as monkey-patching
(see the question What is a monkey patch?).
This could be done in your package's __init__.py initialization script and would affect all subsequent json module serialization, since modules are generally only loaded once and the result is cached in sys.modules.
The patch changes the default json encoder's default method (the default default()).
Here's an example implemented as a standalone module for simplicity's sake:
Module: make_json_serializable.py
""" Module that monkey-patches json module when it's imported so
JSONEncoder.default() automatically checks for a special "to_json()"
method and uses it to encode the object if found.
"""
from json import JSONEncoder
def _default(self, obj):
return getattr(obj.__class__, "to_json", _default.default)(obj)
_default.default = JSONEncoder.default # Save unmodified default.
JSONEncoder.default = _default # Replace it.
Using it is trivial since the patch is applied by simply importing the module.
Sample client script:
import json
import make_json_serializable  # apply monkey-patch

class Foo(object):
    def __init__(self, name):
        self.name = name

    def to_json(self):  # New special method.
        """ Convert to JSON format string representation. """
        return '{"name": "%s"}' % self.name

foo = Foo('sazpaz')
print(json.dumps(foo))  # -> "{\"name\": \"sazpaz\"}"
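In a package, as mentioned above, the same patch could be applied for every user by importing the module in __init__.py (a sketch; mypackage is a made-up name and make_json_serializable.py is assumed to live inside it):

# mypackage/__init__.py
# Importing applies the monkey-patch for anyone who imports mypackage.
from . import make_json_serializable  # noqa: F401 (side-effect import)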
To retain the object type information, the special method can also include it in the string returned:
return ('{"type": "%s", "name": "%s"}' %
(self.__class__.__name__, self.name))
Which produces the following JSON that now includes the class name:
"{\"type\": \"Foo\", \"name\": \"sazpaz\"}"
Magick Lies Here
Even better than having the replacement default() look for a specially named method would be for it to be able to serialize most Python objects automatically, including user-defined class instances, without needing to add a special method. After researching a number of alternatives, the following approach, based on an answer by @Raymond Hettinger to another question, which uses the pickle module, seemed closest to that ideal to me:
Module: make_json_serializable2.py
""" Module that imports the json module and monkey-patches it so
JSONEncoder.default() automatically pickles any Python objects
encountered that aren't standard JSON data types.
"""
from json import JSONEncoder
import pickle
def _default(self, obj):
return {'_python_object': pickle.dumps(obj)}
JSONEncoder.default = _default # Replace with the above.
Of course, not everything can be pickled (extension types, for example). However there are ways defined to handle them via the pickle protocol by writing special methods, similar to what you suggested and I described earlier, but doing that would likely be necessary in far fewer cases.
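For illustration, a minimal sketch of that pickle-protocol escape hatch (the Counter class and its lock attribute are made up for the example):

import pickle
import threading

class Counter:
    def __init__(self, value=0):
        self.value = value
        self._lock = threading.Lock()  # locks cannot be pickled

    def __reduce__(self):
        # Tell pickle how to rebuild the instance: call Counter(self.value).
        return (Counter, (self.value,))

c = pickle.loads(pickle.dumps(Counter(5)))
print(c.value)  # -> 5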
Deserializing
Regardless, using the pickle protocol also means it would be fairly easy to reconstruct the original Python object by providing a custom object_hook function argument to any json.loads() call, which checks the dictionary passed in for a '_python_object' key whenever it has one. Something like:
def as_python_object(dct):
    try:
        return pickle.loads(str(dct['_python_object']))
    except KeyError:
        return dct

pyobj = json.loads(json_str, object_hook=as_python_object)
If this has to be done in many places, it might be worthwhile to define a wrapper function that automatically supplies the extra keyword argument:

import functools

json_pkloads = functools.partial(json.loads, object_hook=as_python_object)
pyobj = json_pkloads(json_str)
Naturally, this could be monkey-patched into the json module as well, making the function the default object_hook (instead of None).
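A hedged sketch of that decoder-side patch, relying on the json module's internal _default_decoder attribute (the object plain json.loads() calls fall back to):

import json

# Replace the module-level default decoder so bare json.loads() calls
# use as_python_object() as their object_hook.
json._default_decoder = json.JSONDecoder(object_hook=as_python_object)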
I got the idea for using pickle from an answer by Raymond Hettinger to another JSON serialization question; I consider him exceptionally credible as well as an official source (he is a Python core developer).
Portability to Python 3
The code above does not work as shown in Python 3 because pickle.dumps() returns a bytes object, which the JSONEncoder can't handle. However, the approach is still valid. A simple way to work around the issue is to latin1-"decode" the value returned from pickle.dumps() and then "encode" it back to latin1 before passing it on to pickle.loads() in the as_python_object() function. This works because arbitrary binary strings are valid latin1, which can always be decoded to Unicode and then encoded back to the original string again (as pointed out in this answer by Sven Marnach).
(Although the following works fine in Python 2, the latin1 decoding and encoding it does is superfluous there.)
import json
import pickle
from decimal import Decimal

class PythonObjectEncoder(json.JSONEncoder):
    def default(self, obj):
        return {'_python_object': pickle.dumps(obj).decode('latin1')}

def as_python_object(dct):
    try:
        return pickle.loads(dct['_python_object'].encode('latin1'))
    except KeyError:
        return dct

class Foo(object):  # Some user-defined class.
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        if type(other) is type(self):  # Instances of same class?
            return self.name == other.name
        return NotImplemented

    __hash__ = None

data = [1, 2, 3, set(['knights', 'who', 'say', 'ni']), {'key': 'value'},
        Foo('Bar'), Decimal('3.141592653589793238462643383279502884197169')]
j = json.dumps(data, cls=PythonObjectEncoder, indent=4)
data2 = json.loads(j, object_hook=as_python_object)
assert data == data2  # both should be same
You can extend the dict class like so:
#!/usr/local/bin/python3
import json

class Serializable(dict):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # hack to fix _json.so make_encoder serialize properly
        self.__setitem__('dummy', 1)

    def _myattrs(self):
        return [
            (x, self._repr(getattr(self, x)))
            for x in self.__dir__()
            if x not in Serializable().__dir__()
        ]

    def _repr(self, value):
        if isinstance(value, (str, int, float, list, tuple, dict)):
            return value
        else:
            return repr(value)

    def __repr__(self):
        return '<%s.%s object at %s>' % (
            self.__class__.__module__,
            self.__class__.__name__,
            hex(id(self))
        )

    def keys(self):
        return iter([x[0] for x in self._myattrs()])

    def values(self):
        return iter([x[1] for x in self._myattrs()])

    def items(self):
        return iter(self._myattrs())
Now to make your classes serializable with the regular encoder, extend 'Serializable':
class MySerializableClass(Serializable):

    attr_1 = 'first attribute'
    attr_2 = 23

    def my_function(self):
        print('do something here')
obj = MySerializableClass()
print(obj) will print something like:
<__main__.MySerializableClass object at 0x1073525e8>
print(json.dumps(obj, indent=4)) will print something like:
{
    "attr_1": "first attribute",
    "attr_2": 23,
    "my_function": "<bound method MySerializableClass.my_function of <__main__.MySerializableClass object at 0x1073525e8>>"
}
I suggest putting the hack into the class definition. This way, once the class is defined, it supports JSON. Example:
import json

class MyClass( object ):

    def _jsonSupport( *args ):
        def default( self, xObject ):
            return { 'type': 'MyClass', 'name': xObject.name() }

        def objectHook( obj ):
            if 'type' not in obj:
                return obj
            if obj[ 'type' ] != 'MyClass':
                return obj
            return MyClass( obj[ 'name' ] )

        json.JSONEncoder.default = default
        json._default_decoder = json.JSONDecoder( object_hook = objectHook )

    _jsonSupport()

    def __init__( self, name ):
        self._name = name

    def name( self ):
        return self._name

    def __repr__( self ):
        return '<MyClass(name=%s)>' % self._name

myObject = MyClass( 'Magneto' )
jsonString = json.dumps( [ myObject, 'some', { 'other': 'objects' } ] )
print "json representation:", jsonString
decoded = json.loads( jsonString )
print "after decoding, our object is the first in the list", decoded[ 0 ]
The problem with overriding JSONEncoder().default is that you can only do it once, so you are stuck if you stumble upon a special data type that does not work with that pattern (for example, one that uses a strange encoding). With the pattern below, you can always make your class JSON serializable, provided that the class field you want to serialize is serializable itself (and can be added to a Python list, which covers nearly anything). Otherwise, you have to recursively apply the same pattern to your json field (or extract the serializable data from it):
# base class that will make all derivatives JSON serializable:
class JSONSerializable(list):  # need to derive from a serializable class.

    def __init__(self, value=None):
        self[:] = [value]  # assign in place; a plain `self = [value]`
                           # would only rebind the local name

    def setJSONSerializableValue(self, value):
        self[:] = [value]

    def getJSONSerializableValue(self):
        return self[0] if len(self) else None
# derive your classes from JSONSerializable:
class MyJSONSerializableObject(JSONSerializable):

    def __init__(self):  # or any other function
        # ....
        # suppose your__json__field is the class member to be serialized.
        # It has to be serializable itself.
        # Every time you want to set it, call this function:
        self.setJSONSerializableValue(your__json__field)
        # ...
        # ... and when you need access to it, get it this way:
        do_something_with_your__json__field(self.getJSONSerializableValue())

# now you have a JSON default-serializable class:
a = MyJSONSerializableObject()
print json.dumps(a)
I don't understand why you can't write a serialize function for your own class. You implement the custom encoder inside the class itself and allow "people" to call the serialize function, which will essentially return self.__dict__ with functions stripped out.
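A minimal sketch of what such a method could look like (the Config class and its fields are made up for illustration):

import json

class Config:
    def __init__(self):
        self.host = 'localhost'
        self.port = 8080

    def serialize(self):
        # self.__dict__ with callables stripped out, dumped as JSON
        return json.dumps({k: v for k, v in self.__dict__.items()
                           if not callable(v)})

print(Config().serialize())  # -> {"host": "localhost", "port": 8080}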
edit:
This question agrees with me that the simplest way is to write your own method and return the JSON-serialized data that you want. They also recommend trying jsonpickle, but now you're adding an additional dependency for beauty when the correct solution comes built in.
For a production environment, prepare your own json module with your own custom encoder, to make it clear that you are overriding something.
A monkey-patch is not recommended, but you can monkey-patch in your test environment.
For example,
import datetime
import json

import phonenumbers  # third-party library

class JSONDatetimeAndPhonesEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, (datetime.date, datetime.datetime)):
            # isoformat the date part; datetime.date has no .date() method,
            # so only call it on datetimes
            if isinstance(obj, datetime.datetime):
                obj = obj.date()
            return obj.isoformat()
        elif isinstance(obj, basestring):  # Python 2
            try:
                number = phonenumbers.parse(obj)
            except phonenumbers.NumberParseException:
                return json.JSONEncoder.default(self, obj)
            else:
                return phonenumbers.format_number(
                    number, phonenumbers.PhoneNumberFormat.NATIONAL)
        else:
            return json.JSONEncoder.default(self, obj)
you want:
payload = json.dumps(your_data, cls=JSONDatetimeAndPhonesEncoder)
or:
payload = your_dumps(your_data)
or:
payload = your_json.dumps(your_data)
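where your_dumps or your_json might come from a thin wrapper module like this (a sketch; the module and import names are made up):

# your_json.py
import functools
import json

from my_encoders import JSONDatetimeAndPhonesEncoder  # hypothetical module

dumps = functools.partial(json.dumps, cls=JSONDatetimeAndPhonesEncoder)
loads = json.loads  # decoding is unchanged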
However, in a testing environment, go ahead:

@pytest.fixture(scope='session', autouse=True)
def testenv_monkey_patching():
    json._default_encoder = JSONDatetimeAndPhonesEncoder()

which will apply your encoder to all json.dumps occurrences.
I have an external API that I cannot modify. For each call to this API, I need to be able to perform an operation before and after. This API is used like this:
def get_api():
    """
    Return an initiated ClassAPI object
    """
    last_token = Token.objects.last()
    api = ClassAPI(
        settings.CLASS_API_ID,
        settings.CLASS_API_SECRET,
        last_token)
    return api
get_api() is called everywhere in the code and the result is then used to perform requests (like api.book_store.get(id=book_id)).
My goal is to return a virtual object that will perform the same operations as the ClassAPI, adding print "Before" and print "After".
The ClassAPI looks like this:
class ClassAPI:
    class BookStore:
        def get(...)
        def list(...)
    class PenStore:
        def get(...)
        def list(...)
I tried to create a class inheriting from (ClassAPI, object) [as ClassAPI doesn't inherit from object] and add a metaclass to that class that decorates all the methods, but I cannot impact the methods of BookStore (get and list).
Any idea how to achieve this, modifying only get_api() and adding additional classes? I am trying to avoid copying the whole structure of the API just to add my operations.
I am using python 2.7.
You could do this with a Proxy:
class Proxy:
    def __init__(self, other):
        self.other = other
        self.calls = []

    def __getattr__(self, name):
        self.calls.append(name)
        return self

    def __call__(self, *args, **kwargs):
        self.before()
        ret = self.call_proxied(*args, **kwargs)
        self.after()
        return ret

    def before(self):
        # Hook for the "before" operation; defined as a real attribute so
        # __getattr__ is not consulted for it.
        print("Before")

    def after(self):
        print("After")

    def call_proxied(self, *args, **kwargs):
        other = self.other
        calls = self.calls
        self.calls = []
        for item in calls:
            other = getattr(other, item)
        return other(*args, **kwargs)
This class intercepts unknown members in the __getattr__() method, saving the names that are used and returning itself.
When a method is called (e.g. api.book_store.get(id=book_id)), it calls a before() and after() method on itself, and in between it fetches the members of other and forwards the arguments in a call.
You use this class like this:
def get_api():
    ...
    return Proxy(api)
Update: corrected the call to self.call_proxied(*args, **kwargs). Also allow any return value to be returned.
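To illustrate (FakeAPI and PenStore here are stand-ins, not the real ClassAPI):

class PenStore:
    def get(self, id):
        return "pen %s" % id

class FakeAPI:
    pen_store = PenStore()

api = Proxy(FakeAPI())
print(api.pen_store.get(id=3))
# Before
# After
# pen 3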
I'm addicted to reading libraries. I like the way their code is structured and beautiful and, most important, readable. I'm trying to learn by doing that. But sometimes lines like this:
something = property(lambda self: object())
catch my eye!
I was inside _socket.py and found this class:
class error(Exception):
    """ Base class for I/O related errors. """

    def __init__(self, *args, **kwargs):  # real signature unknown
        pass

    @staticmethod  # known case of __new__
    def __new__(S, *more):  # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

    def __reduce__(self, *args, **kwargs):  # real signature unknown
        pass

    def __str__(self):  # real signature unknown; restored from __doc__
        """ x.__str__() <==> str(x) """
        pass

    characters_written = property(lambda self: object())  # default
    errno = property(lambda self: object())  # default
    filename = property(lambda self: object())  # default
    strerror = property(lambda self: object())  # default
My great curiosity is about those last 4 lines containing lambda. How does that work? What is their meaning, their result? Can you show an example of that statement in a simple way?
Thanks for now!
First of all, I would recommend reading the Python documentation about properties. They are usually used to create fake attributes.

errno = property(lambda self: object())  # default

In your case, you only define a getter (no setter or deleter) for this attribute, so errno is read-only. And each read returns a brand new object. This is probably not very meaningful, but the rest of the library is probably expecting an errno attribute to exist.
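For example, a sketch of a read-only property defined with a lambda (the Point class is made up):

class Point:
    x = property(lambda self: 42)  # getter only, so x is read-only

p = Point()
print(p.x)  # -> 42
p.x = 7     # raises AttributeError: no setter was defined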
property is a built-in. It's usually used as a decorator. That code is equivalent to this, which might look a bit more familiar:

class error(Exception):
    # ...
    @property
    def characters_written(self):
        return object()

    @property
    def errno(self):
        return object()

    @property
    def filename(self):
        return object()

    @property
    def strerror(self):
        return object()
Still, it doesn't look particularly useful. It means that every time you try to retrieve any of those attributes on an instance of this error class you'll get back a new unique object instance.
They look more like placeholders, perhaps unsupported implementations. They return useless objects. They seem to be suitable when you need a non-None value.