I was wondering if it is possible to use the attrs library to convert nested JSON to Python class instances so that I can access attributes in that JSON via dot notation (object.attribute.nested_attribute).
My JSONs have a fixed schema, and I would be fine with having to define the classes for that schema manually, but I'm not sure if it would be possible to turn the JSON into the nested class structure without having to instantiate every nested object individually. I'm basically looking for a fromdict() function that knows (based on the keys) which class to turn a JSON object into.
(I also know that there are other ways to build 'DotDicts', but these always seem a bit hacky to me and would probably need thorough testing to verify that they work correctly.)
The attrs wiki currently lists two serialization libraries: cattrs and related, with cattrs being maintained by one of attrs' most prolific contributors.
I know that some people mention integrations with other systems too. At this point it's unlikely that attrs will grow its own solution, since the externally developed ones look pretty good.
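For illustration, a minimal sketch of the cattrs route, assuming attrs-style classes defined to match the schema (the class and field names here are made up):

from attrs import define
import cattrs

@define
class Inner:
    nested_attribute: int

@define
class Outer:
    attribute: Inner

data = {"attribute": {"nested_attribute": 42}}
obj = cattrs.structure(data, Outer)  # picks the right class for each nested dict
print(obj.attribute.nested_attribute)  # 42

cattrs.structure() recurses through the type annotations, so you only name the top-level class yourself rather than instantiating every nested object individually.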
Related
I am running a graphql query using aiographql-client and getting back a GraphQLResponse object, which contains a raw dict as part of the response json data.
This dictionary conforms to a schema, which I am able to parse into a graphql.type.schema.GraphQLSchema type using graphql-core's build_schema method.
I can also correctly get the GraphQLObjectType of the object that is being returned; however, I am not sure how to properly deserialize the dictionary into a Python object with all the appropriate fields, using the GraphQLObjectType as a reference.
Any help would be greatly appreciated!
I'd recommend using Pydantic to do the heavy lifting in the parsing.
You can then either generate the models beforehand and select the ones you need based on GraphQLObjectType or generate them at runtime based on the definition returned by build_schema.
If you really must define your models at runtime, you can do that with pydantic's create_model function described here: https://pydantic-docs.helpmanual.io/usage/models/#dynamic-model-creation
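For example, a minimal sketch of create_model (the field names are hypothetical and not tied to any particular GraphQL schema):

from pydantic import create_model

# Build a model at runtime: name is a required str, age defaults to 0
UserModel = create_model("UserModel", name=(str, ...), age=(int, 0))

user = UserModel(name="Alice")
print(user.age)  # 0

You would derive the field names and types from the GraphQLObjectType's fields instead of hard-coding them.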
For the static model generation you can probably leverage something like https://jsontopydantic.com/
If you share some code samples, I'd be happy to give some more insights on the actual implementation.
I faced the same tedious problem while developing a personal project.
Because of that, I just published a library whose purpose is to manage the mappings between Python objects and GraphQL objects (Python objects -> GraphQL query and GraphQL response -> Python objects).
https://github.com/dapalex/py-graphql-mapper
So far it manages only basic queries and responses; if it proves useful I will keep implementing more features.
Have a look and see if it can help you.
Coming back to this, there are some projects out there trying to achieve this functionality:
https://github.com/enra-GmbH/graphql-codegen-ariadne
https://github.com/sauldom102/gql_schema_codegen
I am currently working on a GraphQL API using aws-appsync and lambda-resolvers written in python. Inside my lambdas, I am working with dicts only. With a growing project size, it becomes hard to follow the logic without knowledge of the graphql.schema. Therefore, I would like to use the corresponding python class representations of my GraphQL types/inputs.
I am looking for a simple library which transforms the types and inputs from my graphql.schema to python classes, optimally leveraging something like pydantic.
On the web, I only found full client/server libraries, such as Strawberry and Ariadne.
Does a library exist that satisfies my needs, or do I need to implement it on my own?
Not sure if it can help, but I just published a kind of Python-GraphQL mapper that transforms GraphQL responses into Python objects (having declared the Python class and its fields) and vice versa.
https://github.com/dapalex/py-graphql-mapper
So far it manages only basic queries and responses; if it proves useful I will keep implementing more features.
Have a look and see if it can help you.
This question already has answers here:
How to make a class JSON serializable
I'm noting that the methods I've looked at for serializing a variable into JSON in Python don't really handle it all that well. For my purposes, I just want to quickly dump an object's contents into string form so I can pick out what I actually want to write custom handling code for. I want to be able to dump at least the main fields of any class I pass to the Python serializer, and really, if it's worth the name, this should work.
So take the following code:
import json
c = SomeClass()
# raises an error if any field of SomeClass holds another class instance
json.dumps(c)
which leads to:
TypeError: Object of type {Type} is not JSON serializable
Are there any modules other people have used that would solve my problem? I really don't see how there would not be. Or maybe someone might explain how to circumvent this error?
The goal is simply to get some output to look at. If I wrote a recursion loop in C# using reflection, excepting circular references, it wouldn't be difficult, so I cannot imagine Python users have never tackled this exact issue. I'm not satisfied with the answers I have seen in older posts, which seem to suggest a lot of custom tinkering for something that seems designed, in spirit, to just dump any old object's contents out.
The funny part is that I don't even need complex traversal, though it would be nice. I just need a dump of the property values, which are primitive types in many cases. I know this is possible because the debugger does it.
Additionally, I looked at one of the suggested methods, which uses the default argument with a lambda to tell the JSON serializer how to descend into the object:
json.dumps(o, default=lambda k: k.__dict__)
but the object does not contain the standard __dict__ member.
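(A slightly more defensive variant of that call, sketched here rather than taken from the post, falls back to repr() when __dict__ is missing:

import json

json.dumps(o, default=lambda v: getattr(v, "__dict__", repr(v)))

This avoids the error for __slots__-only objects, though their output is then just a string.)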
In the end, I just ended up writing a class to do this.
Edit: I added a nifty little bit of code that lets you one-way serialize a class structure; it addressed my problem with discord.py. End edit.
There is no fire-and-forget option that will disentangle a mass of information.
The way to create this solution would be to manage separate lists of already-visited objects to make sure you don't recurse until a stack overflow is reached.
__slots__ can be used with getattr(o, name) when hasattr(o, '__dict__') is False.
But the answer is that you'd have to create a solution that basically does the job the JSON serializer should be doing: cut out circular references by identifying the unique complex objects, writing them as separate tabular entries in the JSON file, and replacing them in the referencing classes with ids.
That way you could cross-reference these objects while glancing at them.
However, the short answer is no. Python does not offer an out-of-the-box way of doing this, and all the answers encountered thus far only solve a single use case or scenario; they do not create an integrated solution to the problem, which the above-mentioned algorithm would by normalizing the class data into unique elements.
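As a sketch of that approach (the helper name and the "$id"/"$ref" markers are my own, not from any library): walk the attributes recursively, track objects already seen by id() to break cycles, and fall back to __slots__ when there is no __dict__.

import json

def to_jsonable(obj, seen=None):
    if seen is None:
        seen = set()
    # Primitives pass through untouched
    if obj is None or isinstance(obj, (str, int, float, bool)):
        return obj
    # Replace repeated complex objects with a reference id to cut circular references
    if id(obj) in seen:
        return {"$ref": id(obj)}
    seen.add(id(obj))
    if isinstance(obj, dict):
        return {str(k): to_jsonable(v, seen) for k, v in obj.items()}
    if isinstance(obj, (list, tuple, set)):
        return [to_jsonable(v, seen) for v in obj]
    # Prefer __dict__, fall back to __slots__, otherwise give up and use repr()
    if hasattr(obj, "__dict__"):
        attrs = vars(obj)
    elif hasattr(obj, "__slots__"):
        attrs = {name: getattr(obj, name) for name in obj.__slots__ if hasattr(obj, name)}
    else:
        return repr(obj)
    return {"$id": id(obj), **{k: to_jsonable(v, seen) for k, v in attrs.items()}}

# Usage: json.dumps(to_jsonable(some_object), indent=2)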
I have a large-ish YAML file (~40 lines) that I'm loading using PyYAML. This is of course parsed into a large-ish dictionary plus a couple of arrays.
My question is: how to manage the data. I can of course leave it in the output dictionary and work through the data, but I was wondering if it's better instead to wrangle the data into a class or use a namedtuple to hold it.
Any first-hand experience about that?
Whether you post-process the data structure into a class or not primarily has to do with how you are using that data. The same applies to the decision whether to use a tag or not and load (some of) the data from the YAML file into a specific instance of a class that way.
The primary advantage of using a class in both cases (post-processing, tagging) is that you can do additional consistency tests during initialisation that are not done on the key-value pairs of a dict or on the items of a list.
A class also allows you to provide methods to check values before they are set, e.g. to make sure they are of the right type.
Whether that overhead is necessary depends on the project, who is using and/or updating the data, etc., and how long this project and its data are going to live (i.e. are you still going to understand the data and its implicit structure a year from now). These are all issues for which a well designed (and documented) class can help, at the cost of some extra work up front.
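For instance, a minimal sketch of post-processing a loaded YAML mapping into a class that validates on initialisation (the keys and filename here are hypothetical):

import yaml

class Config:
    def __init__(self, data):
        # Consistency checks a plain dict would not give you
        if "name" not in data:
            raise ValueError("missing required key: name")
        self.name = data["name"]
        self.retries = int(data.get("retries", 3))  # check/coerce the type on the way in

with open("config.yaml") as f:
    cfg = Config(yaml.safe_load(f))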
Following an earlier question I asked here (Most appropriate way to combine features of a class to another?), I got an answer that I have finally grown to understand. In short, what I intend to do now is have a bunch of dictionaries; each dictionary will look somewhat like this:
{ "url": "http://....", "parser": SomeParserClass }
Though more properties might be added later, they will include either strings or some other classes.
Now my question is: what's the best way to save these objects?
I thought of 3 solutions, but I'm not sure which one is the best or whether there are other, more acceptable solutions.
Use pickle; while it seems efficient, it would make editing any of these dictionaries a pain, since it's saved in a binary format.
Save each dictionary in a separate module and import these modules dynamically from a single directory; each module would either have a function inside it to return the dictionary or a specially crafted variable name to hold it so I could call it from my loading code. This seems the easiest to edit but doesn't sound very efficient or Pythonic.
Use some sort of database like MongoDB or Riak to save these objects. My problems with this one are editing (doable, but doesn't sound like fun) and the fact that, while the former two are equipped with means to correctly save my parser class inside the dictionary, I have no idea how these databases would serialize or 'pickle' such an object.
As you can see, my main concerns are how easy it would be to edit them, the efficiency of saving and retrieving the data (not a huge concern, since I only have a couple hundred of these), and the correctness of the solution.
So, any thoughts?
Thank you in advance for any help you might be able to provide.
Use JSON. It supports Python dictionaries and can be easily edited.
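One way to do that while still recording the parser class (a sketch, not part of the answer above) is to store the class's import path as a string and resolve it when loading:

import importlib
import json

entries = [{"url": "http://example.com", "parser": "html.parser.HTMLParser"}]

with open("entries.json", "w") as f:
    json.dump(entries, f, indent=2)

with open("entries.json") as f:
    loaded = json.load(f)

# Turn the stored dotted path back into the actual class
module_name, class_name = loaded[0]["parser"].rsplit(".", 1)
parser_cls = getattr(importlib.import_module(module_name), class_name)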
You can try shelve. It's built on top of pickle and lets you serialize objects and associate them with string keys.
Because it is based on dbm, it will only access keys and values as you need them. So if you only need to access a few items from a large dictionary, shelve may be a better choice than json, which has to load the entire JSON file into a dictionary first.
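A minimal sketch of that, with a placeholder parser class (the key and filename are made up):

import shelve

class SomeParserClass:
    pass

# shelve pickles each value under its string key
with shelve.open("parsers.db") as db:
    db["example"] = {"url": "http://example.com", "parser": SomeParserClass}

# Later, only the entry you ask for is loaded, not the whole file
with shelve.open("parsers.db") as db:
    entry = db["example"]
    print(entry["url"], entry["parser"])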