Convert GraphQLResponse dictionary to Python object

I am running a graphql query using aiographql-client and getting back a GraphQLResponse object, which contains a raw dict as part of the response json data.
This dictionary conforms to a schema, which I am able to parse into a graphql.type.schema.GraphQLSchema type using graphql-core's build_schema method.
I can also correctly get the GraphQLObjectType of the object that is being returned, however I am not sure how to properly deserialize the dictionary into a python object with all the appropriate fields, using the GraphQLObjectType as a reference.
Any help would be greatly appreciated!

I'd recommend using Pydantic to do the heavy lifting in the parsing.
You can then either generate the models beforehand and select the ones you need based on GraphQLObjectType or generate them at runtime based on the definition returned by build_schema.
If you really must define your models at runtime, you can do that with pydantic's create_model function, described here: https://pydantic-docs.helpmanual.io/usage/models/#dynamic-model-creation
For the static model generation you can probably leverage something like https://jsontopydantic.com/
If you share some code samples, I'd be happy to give some more insight on the actual implementation.
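For instance, a runtime model built with create_model could look like this. This is a minimal sketch: the field definitions and the "Hero" name are made up, and in practice would be derived from the GraphQLObjectType's fields by mapping GraphQL scalars to Python types (Int -> int, String -> str, and so on).

```python
from pydantic import create_model

# Hypothetical field definitions, e.g. derived from a GraphQLObjectType.
# Each entry is (type, default); ... means the field is required.
fields = {"id": (int, ...), "name": (str, None)}

# Build the model class at runtime; "Hero" is a placeholder name.
Hero = create_model("Hero", **fields)

# Deserialize the raw response dict into a typed object.
hero = Hero(**{"id": 1, "name": "Luke"})
print(hero.id, hero.name)
```

The scalar-to-Python-type mapping is the part you'd have to write yourself by walking the fields of the GraphQLObjectType.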

I faced the same tedious problem while developing a personal project. Because of that, I just published a library whose purpose is managing the mappings between Python objects and GraphQL objects (Python objects -> GraphQL query, and GraphQL response -> Python objects).
https://github.com/dapalex/py-graphql-mapper
So far it manages only basic queries and responses; if it proves useful I will keep implementing more features.
Have a look and see if it can help you.

Coming back to this, there are some projects out there trying to achieve this functionality:
https://github.com/enra-GmbH/graphql-codegen-ariadne
https://github.com/sauldom102/gql_schema_codegen

Related

Besides automatic documentation, what's the rationale of providing a response model for FastAPI endpoints?

The question is basically in the title: Does providing a bespoke response model serve any further purpose besides clean and intuitive documentation? What's the purpose of defining all these response models for all the endpoints rather than just leaving it empty?
I've started working with FastAPI recently, and I really like it. I'm using FastAPI with a MongoDB backend. I've followed the following approach:
Create a router for a given endpoint-category
Write the endpoint with the decorator etc. This involves the relevant query and defining the desired output of the query.
Then, test and trial everything.
Usually, prior to finalising an endpoint, I would set the response_model in the decorator to something generic, like List (imported from typing). This would look something like this:
@MyRouter.get(
    '/the_thing/{the_id}',
    response_description="Returns the thing for target id",
    response_model=List,
    response_model_exclude_unset=True
)
In the Swagger UI documentation, this results in an uninformative response model.
So, I end up defining a response-model, which corresponds to the fields I'm returning in my query in the endpoint function; something like this:
from pydantic import BaseModel

class the_thing_out(BaseModel):
    id: int
    name: str | None
    job: str | None
And then, I modify the following: response_model=List[the_thing_out]. This will give a preview of what I can expect to be returned from a given call from within the swagger ui documentation.
Well, to be fair, having an automatically generated OpenAPI-compliant description of your interface is very valuable in and of itself.
Other than that, there is the benefit of data validation in the broader sense, i.e. ensuring that the data that is actually sent to the client conforms to a pre-defined schema. This is why Pydantic is so powerful and FastAPI just utilizes its power to that end.
You define a schema with Pydantic, set it as your response_model and then never have to worry about wrong types or unexpected values or what have you accidentally being introduced in your response data.* If you try to return some invalid data from your route, you'll get an error, instead of the client potentially silently receiving garbage that might mess up the logic on its end.
Now, could you achieve the same thing by just manually instantiating your Pydantic model with the data you want to send yourself first, then generating the JSON and packaging that in an adequate HTTP response?
Sure. But that is just extra steps you have to make for each route. And if you do that three, four, five times, you'll probably come up with an idea to factor out that model instantiation, etc. in a function that is more or less generic over any Pydantic model and data you throw at it... and oh, look! You implemented your own version of the response_model logic. 😉
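To illustrate that validation benefit in isolation, here is a minimal Pydantic-only sketch (the model and field names are made up; FastAPI runs the equivalent check for you when response_model is set):

```python
from typing import Optional

from pydantic import BaseModel, ValidationError

# A made-up response model for illustration.
class ThingOut(BaseModel):
    id: int
    name: Optional[str] = None

# Valid data is accepted as-is.
thing = ThingOut(id=1, name="widget")

# Invalid data raises instead of silently reaching the client --
# this is essentially what response_model enforcement does for you.
try:
    ThingOut(id="not-an-int")
except ValidationError:
    print("invalid response data rejected")
```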
Now, all this becomes more and more important the more complex your schemas get. Obviously, if all your route does is return something like
{"exists": 1}
then neither validation nor documentation is all that worthwhile. But I would argue it's usually better to prepare in advance for potential growth of whatever application you are developing.
Since you are using MongoDB in the back, I would argue this becomes even more important. I know, people say that it is one of the "perks" of MongoDB that you need no schema for the data you throw at it, but as soon as you provide an endpoint for clients, it would be nice to at least broadly define what the data coming from that endpoint can look like. And once you have that "contract", you just need a way to safeguard yourself against messing up, which is where the aforementioned model validation comes in.
Hope this helps.
* This rests on two assumptions of course: 1) You took great care in defining your schema (incl. validation) and 2) Pydantic works as expected.

Graphql.schema types and inputs to python classes

I am currently working on a GraphQL API using aws-appsync and lambda-resolvers written in python. Inside my lambdas, I am working with dicts only. With a growing project size, it becomes hard to follow the logic without knowledge of the graphql.schema. Therefore, I would like to use the corresponding python class representations of my GraphQL types/inputs.
I am looking for a simple library which transforms the types and inputs from my graphql.schema to python classes, optimally leveraging something like pydantic.
On the web, I only found full client/server libraries like for example Strawberry and Ariadne.
Does a library exist that satisfies my needs, or do I need to implement it on my own?
Not sure if it can help, but I just published a kind of Python-GraphQL mapper transforming GraphQL responses into Python objects (having declared the Python class and its fields) and vice versa.
https://github.com/dapalex/py-graphql-mapper
So far it manages only basic queries and responses; if it proves useful I will keep implementing more features.
Have a look and see if it can help you.

How to load an json api in cayley graph db

I have data located within various data sources like JSON files, various APIs, etc. Now there is a requirement to collate all of this data and push it into a Cayley graph database.
This will eventually act as input for a chatbot framework. I am currently not aware of how to collate the existing data, push it into Cayley, and retrieve it from the Cayley graph database.
Help needed.
Thanks in advance.
Unfortunately, Cayley cannot import JSON data directly by design.
The main reason is that it has no way of knowing which values in JSON are node IDs and which are regular string values.
However, it supports the JSON-LD format, which is the same as regular JSON but includes some additional annotations. These annotations resolve the ambiguity I mentioned.
I suggest checking JSON-LD Playground examples first and then schema.org for a list of well-known object types. Note that it's also possible to define your own types. See JSON-LD documentation for details.
The last step would be to use Cayley's HTTP API v2 to import the data. Make sure to pass a correct Content-Type header, or use Cayley client that supports JSON-LD.
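As a rough illustration, a JSON-LD node is ordinary JSON plus a few reserved keys that tell Cayley which values are node IDs (@id) and what the type is (@type). A minimal sketch in Python follows; the vocabulary (schema.org) and the example IDs are illustrative only:

```python
import json

# A minimal JSON-LD node: plain JSON data plus @context/@id/@type
# annotations. The IDs below are made up for illustration.
person = {
    "@context": "http://schema.org/",
    "@id": "http://example.com/people/alice",
    "@type": "Person",
    "name": "Alice",
    # References to other nodes are marked with @id so Cayley
    # knows they are edges to nodes, not plain string values.
    "knows": {"@id": "http://example.com/people/bob"},
}

payload = json.dumps(person)
print(payload)
```

A payload like this would then be sent to Cayley's HTTP API with the appropriate JSON-LD Content-Type, as described above.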

Using attrs to turn JSONs into Python classes

I was wondering if it is possible to use the attrs library to convert nested JSONs to Python class instances so that I can access attributes in that JSON via dot notation (object.attribute.nested_attribute).
My JSONs have a fixed schema, and I would be fine with having to define the classes for that schema manually, but I'm not sure if it would be possible to turn the JSON into the nested class structure without having to instantiate every nested object individually. I'm basically looking for a fromdict() function that knows (based on the keys) which class to turn a JSON object into.
(I also know that there are other ways to build 'DotDicts', but these seem always a bit hacky to me and would probably need thorough testing to verify that they work correctly.)
The attrs wiki currently lists two serialization libraries: cattrs and related, with cattrs being maintained by one of attrs' most prolific contributors.
I know that some people mention integrations with other systems too. At this point it's unlikely that attrs will grow its own solution, since the externally developed ones look pretty good.
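cattrs's structure() is essentially the fromdict() the question asks for. To show the underlying idea with nothing but the standard library, here is a simplified sketch using dataclasses instead of attrs (a stand-in for illustration, not how cattrs is implemented):

```python
from dataclasses import dataclass, fields, is_dataclass

@dataclass
class Address:
    city: str
    zip_code: str

@dataclass
class User:
    name: str
    address: Address

def fromdict(cls, data):
    """Recursively build a dataclass instance from a nested dict,
    using the field annotations to decide where to recurse."""
    kwargs = {}
    for f in fields(cls):
        value = data[f.name]
        # Recurse into fields whose declared type is itself a dataclass.
        kwargs[f.name] = fromdict(f.type, value) if is_dataclass(f.type) else value
    return cls(**kwargs)

user = fromdict(User, {"name": "Ann", "address": {"city": "Oslo", "zip_code": "0150"}})
print(user.address.city)
```

cattrs does this same recursion far more robustly, handling lists, unions, string annotations, and custom hooks.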

Serializing data and unpacking safely from untrusted source

I am using Pyramid as a basis for transfer of data for a turn-based video game. The clients use POST data to present their actions, and GET to retrieve serialized game board data. The game data can sometimes involve strings, but is almost always two integers and two tuples:
gamedata = (userid, gamenumber, (sourcex, sourcey), (destx, desty))
My general client-side approach was to pickle, convert to base64, urlencode, and submit the POST. The server then receives the POST, unpacks the single-item dictionary, decodes the base64, and then unpickles the data object.
I want to use Pickle because I can use classes and values. Submitting game data as POST fields can only give me strings.
However, Pickle is regarded as unsafe. So I turned to PyYAML, which serves the same purpose. Using yaml.safe_load(data), I can deserialize data without exposing security flaws. However, safe_load is VERY safe; I cannot even deserialize harmless tuples or lists, even if they only contain integers.
Is there some middle ground here? Is there a way to serialize python structures without at the same time allowing execution of arbitrary code?
My first thought was to write a wrapper for my send and receive functions that uses underscores in value names to recreate tuples, e.g. sending would convert the dictionary value source : (x, y) to source_0 : x, source_1: y. My second thought was that it wasn't a very wise way to develop.
edit: Here's my implementation using JSON... it doesn't seem as powerful as YAML or Pickle, but I'm still concerned there may be security holes.
The client side was written out a bit more explicitly while I experimented:
import json, base64
from urllib.parse import urlencode
from urllib.request import urlopen

arbitrarydata = {'id': 14, 'gn': 25, 'sourcecoord': (10, 12), 'destcoord': (8, 14)}
jsondata = json.dumps(arbitrarydata)
b64data = base64.urlsafe_b64encode(jsondata.encode('ascii'))
transmitstring = urlencode([('data', b64data)])
urlopen('http://127.0.0.1:9000/post', transmitstring.encode('ascii')).read()
Pyramid Server can retrieve the data objects:
json.loads(base64.urlsafe_b64decode(request.POST['data'].encode('ascii')))
On an unrelated note, I'd love to hear some other opinions about the acceptability of using POST data in this method, my game client is in no way browser based at this time.
Why not use colander for your serialization and deserialization? Colander turns an object schema into simple data structure and vice-versa, and you can use JSON to send and receive this information.
For example:
import colander

class Item(colander.MappingSchema):
    thing = colander.SchemaNode(colander.String(),
                                validator=colander.OneOf(['foo', 'bar']))
    flag = colander.SchemaNode(colander.Boolean())
    language = colander.SchemaNode(colander.String(),
                                   validator=colander.OneOf(supported_languages))

class Items(colander.SequenceSchema):
    item = Item()
The above setup defines a list of item objects, but you can easily define game-specific objects too.
Deserialization becomes:
items = Items().deserialize(json.loads(jsondata))
and serialization is:
json.dumps(Items().serialize(items))
Apart from letting you round-trip python objects, it also validates the serialized data to ensure it fits your schema and hasn't been mucked about with.
How about json? The library is part of the standard Python libraries, and it allows serialization of most generic data without arbitrary code execution.
I don't see raw JSON providing the answer here, as I believe the question specifically mentioned pickling classes and values. I don't believe using straight JSON can serialize and deserialize python classes, while pickle can.
I use a pickle-based serialization method for almost all server-to-server communication, but always include very serious authentication mechanisms (e.g. RSA key-pair matching). However, that means I only deal with trusted sources.
If you absolutely need to work with untrusted sources, I would at the very least try to add (much like @MartijnPieters suggests) a schema to validate your transactions. I don't think there is a good way to work with arbitrary pickled data from an untrusted source. You'd have to do something like parse the byte-string with some disassembler and then only allow trusted patterns (or block untrusted patterns). I don't know of anything that can do this for pickle.
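The standard library does ship a pickle disassembler, pickletools, which can at least inspect an opcode stream without executing it. A crude sketch of that idea follows; the opcode check is illustrative only and is not a real security barrier:

```python
import io
import pickle
import pickletools

# A "plain data" pickle, like the game tuple from the question.
data = pickle.dumps((14, 25, (10, 12), (8, 14)))

# Disassemble the opcode stream without unpickling it.
buf = io.StringIO()
pickletools.dis(data, out=buf)
listing = buf.getvalue()

# A crude heuristic: plain-data pickles contain no GLOBAL/REDUCE
# opcodes, which are what code-executing pickles rely on.
suspicious = any(op in listing for op in ("GLOBAL", "STACK_GLOBAL", "REDUCE"))
print("suspicious" if suspicious else "looks like plain data")
```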
However, if your class is "simple enough"… you might be able to use the JSONEncoder, which essentially converts your python class to something JSON can serialize… and thus validate…
How to make a class JSON serializable
The catch, however, is that you have to provide a JSONEncoder subclass that knows how to encode your classes.
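A minimal sketch of that approach with only the standard library (the GameMove class and the "__gamemove__" tag are made up for illustration):

```python
import json

class GameMove:
    def __init__(self, userid, gamenumber, source, dest):
        self.userid = userid
        self.gamenumber = gamenumber
        self.source = source
        self.dest = dest

class GameMoveEncoder(json.JSONEncoder):
    """Encode GameMove instances as tagged dicts that plain JSON can carry."""
    def default(self, o):
        if isinstance(o, GameMove):
            return {"__gamemove__": True, **vars(o)}
        return super().default(o)

def decode_gamemove(d):
    """object_hook: rebuild GameMove instances from tagged dicts."""
    if d.get("__gamemove__"):
        return GameMove(d["userid"], d["gamenumber"],
                        tuple(d["source"]), tuple(d["dest"]))
    return d

wire = json.dumps(GameMove(14, 25, (10, 12), (8, 14)), cls=GameMoveEncoder)
move = json.loads(wire, object_hook=decode_gamemove)
print(move.source)
```

Note that the tuples survive the round trip only because the decoder explicitly reconstructs them; JSON itself only has lists.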
