How to specify (import?) a schema in GraphQL in Python (graphene?)?

A shame to ask, but we have a GraphQL server in Java (https://github.com/graphql-java/graphql-spring-boot), where we specified a types.graphqls schema for our service.
On the client side we have JS code based on the Apollo Client library, and it does not need access to this types file.
But the day has come when I need to write some API tests. Most people in our team speak Python very well, so I decided to build the test workbench in Python, but I can't find any library that lets me write queries schema-free or import my types.graphqls schema.
How can I write tests in Python for a custom GraphQL server, though?
Thanks!

Finally I found a gist with a simple GraphQL client based on the requests library:
import requests

def run_query(query, variables):
    # POST the query and its variables to the GraphQL endpoint.
    request = requests.post('https://dev.darkdata.finance:9000/graphql',
                            json={'query': query, 'variables': variables})
    if request.status_code == 200:
        return request.json()
    else:
        raise Exception("Query failed to run by returning code of {}. {}".format(
            request.status_code, query))
You can use it to test simple queries if you want.
Source: https://gist.github.com/gbaman/b3137e18c739e0cf98539bf4ec4366ad
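For example, you could call run_query with a query that takes a variable like this (the query, field names, and variable here are placeholders for illustration, not the actual schema of the service above):

```python
# Placeholder query; adjust the fields and variables to your own schema.
query = """
    query getUser($id: ID!) {
        user(id: $id) {
            name
            email
        }
    }
"""

result = run_query(query, {"id": "42"})
print(result["data"])
```

From there it is straightforward to wrap such calls in unittest or pytest assertions against the returned dictionaries.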

Related

Package to build custom client to consume an API in Python based on a configuration file?

I am new to working with APIs in general and am writing Python code that needs to consume/interact with an API someone else has set up. I was wondering if there is any package out there that would build some sort of custom client class to interact with an API, given a file outlining the API in some way (like a JSON file where each available endpoint and HTTP verb is described in terms of things like the allowed payload JSON schema for POSTs, the general params allowed and their types, the expected response JSON schema, the header key/value for a given verb, etc.). It would be helpful if I could have one master file outlining the available endpoints, and then some package uses that to generate a client class we can use to consume the API as described.
In my googling, most API packages I have found in Python are much more focused on the generation of APIs, but this isn't what I want.
Basically, I believe you are looking for the requests package.
response = requests.get(f'{base_url}{endpoint}',
                        params={'foo': self.bar,
                                'foo_2': self.bar_2},
                        headers={'X-Api-Key': secret})
And from here, you can build your own class, pass the data to a DataFrame, or whatever you need.
The requests package has basically everything you need: status handling, exception handling, and so on.
Please check the docs.
https://pypi.org/project/requests/
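I'm not aware of a single standard package that does exactly this, but as a rough sketch of the idea (the configuration layout, endpoint names, and URL below are made up for illustration), you could drive a small client class from a dictionary describing the endpoints:

```python
import requests

# Hypothetical configuration describing the API; in practice this could be
# loaded from a JSON or YAML file (the "master file" you describe).
API_CONFIG = {
    "base_url": "https://api.example.com",
    "endpoints": {
        "list_users": {"method": "GET", "path": "/users"},
        "create_user": {"method": "POST", "path": "/users"},
    },
}

class ConfigClient:
    def __init__(self, config, api_key=None):
        self.base_url = config["base_url"]
        self.endpoints = config["endpoints"]
        self.headers = {"X-Api-Key": api_key} if api_key else {}

    def call(self, name, params=None, json=None):
        # Look up the endpoint in the config and issue the matching request.
        endpoint = self.endpoints[name]
        response = requests.request(
            endpoint["method"],
            self.base_url + endpoint["path"],
            params=params,
            json=json,
            headers=self.headers,
        )
        response.raise_for_status()  # raise on 4xx/5xx
        return response.json()

# Usage:
# client = ConfigClient(API_CONFIG, api_key="secret")
# users = client.call("list_users", params={"page": 1})
```

Loading API_CONFIG from a file instead of a literal dict gives you the single master file driving the client.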

How to post a request to an API using only code?

I am developing a DAG to be scheduled on Apache Airflow whose main purpose will be to post survey data (in JSON format) to an API and then get a response (the answers to the surveys). Since this whole process is going to be automated, every part of it has to be programmed in the DAG, so I can't use Postman or any similar app (unless there is a way to automate their usage, but I don't know if this is possible).
I was thinking of using the requests library for Python, and the function I've written for posting the json to the API looks like this:
import requests

def postFileToAPI(**context):
    print('uploadFileToAPI() ------ ')
    json_file = context['ti'].xcom_pull(task_ids='toJson')  # this pulls the JSON file from a previous task
    print('--------------- Posting survey request to API')
    r = requests.post('https://[request]', data=json_file)
(I haven't finished defining the HTTP URL for the request because my source data is incomplete.)
However, since this is my first time working with APIs and the requests library, I don't know if this is enough. For example, I'm unsure whether I need to provide a token from the API to perform the request.
I also don't know if there are other libraries that are better suited for this or that could offer better support.
In short: I don't know if what I'm doing will work as intended, what other information I need to provide to my DAG, or if there are any libraries to make my work easier.
The Python requests package that you're using is all you need, except if you're making a request that needs extra authorisation. In that case, also import the package that matches your authorisation style, for example requests_jwt (then from requests_jwt import JWTAuth) if you're using JSON web tokens.
You make POST and GET requests (and every other kind of request) individually.
Include the URL and data arguments as you have done and that should work!
You may also need headers and/or auth arguments to get through security.
For example, for the GitLab API on a private repository you would include these extra arguments, where GITLAB_TOKEN is a GitLab web token:
```
headers={'PRIVATE-TOKEN': GITLAB_TOKEN},
auth=JWTAuth(GITLAB_TOKEN)
```
If you just try it, it should work; if it doesn't, test the API with curl requests directly in the terminal, or let us know :)
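As a minimal sketch of what the posting function could look like with a token header (the URL, header names, and token are placeholders, since they depend entirely on the API you are calling):

```python
import requests

def post_file_to_api(**context):
    # Pull the JSON payload prepared by the previous Airflow task.
    json_file = context['ti'].xcom_pull(task_ids='toJson')

    # Placeholder URL and token; replace with your real endpoint and whatever
    # authorisation scheme the API expects (token header, JWT, basic auth, ...).
    response = requests.post(
        'https://api.example.com/surveys',
        data=json_file,
        headers={'Authorization': 'Bearer YOUR_TOKEN',
                 'Content-Type': 'application/json'},
    )
    response.raise_for_status()  # fail the Airflow task if the API returns an error
    return response.json()       # the survey answers, if the API returns JSON
```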

REST API for SPARQL in Django (Python)?

Introduction
The challenge I bring to you today is: to implement a real-time REST API (GET, POST, PUT, DELETE, etc.) to query and update any SPARQL endpoint using the Django REST Framework, for a frontend application (I am using React) to request and use the serialized data provided by the REST API.
Please note that I'm using Django because I would like to implement web AND mobile applications in the future, but for now I will just implement it in a React web application.
Specifications
The REST API should be able to:
Perform (read or update) queries to a SPARQL endpoint via HTTP requests.
Serialize the response to a standardized JSON RDF table, or an RDF graph, depending on the HTTP response.
Store the serialized response in a Python object.
Provide an endpoint with the serialized response to a frontend application (such as React).
Handle incoming requests from the frontend application, then "translate" and execute them as SPARQL queries.
Send the response back to the frontend application.
ALL OF THIS while performing all queries and updates in real time.
What I mean by a real-time API:
A SPARQL query is executed from the REST API to a SPARQL endpoint via an HTTP request.
The REST API reads the HTTP response generated from the request.
The REST API serializes the response to the corresponding format.
This serialized response is stored locally in a Python object for future use.
(Note: all the triples covered by the query now exist both in the SPARQL endpoint and in a Python object, and are consistent both locally and remotely.)
The triples are then (hypothetically) modified or updated, either locally or remotely.
Now the local triples are out of sync with the remote triples.
The REST API becomes aware of this update (maybe through listener/observer objects?).
The REST API then automatically syncs the triples, either through an update query request (if the changes were made locally) or by updating the Python object with the response from a query request (if the update was made remotely).
Finally, both the SPARQL endpoint and the Python object share the latest updated triples and are therefore in sync.
Previous Attempts
So far I have been able to query a SPARQL endpoint using the SPARQLWrapper package (for executing the queries), and the RDFLib and json packages (for serializing the response and instantiating Python objects from it), like this:
import json
from rdflib import RDFS, Graph
from SPARQLWrapper import GET, JSON, JSONLD, POST, TURTLE, SPARQLWrapper

class Store(object):
    def __init__(self, query_endpoint, update_endpoint=None):
        self.query_endpoint = query_endpoint
        self.update_endpoint = update_endpoint
        self.sparql = SPARQLWrapper(query_endpoint, update_endpoint)

    def graph_query(self, query: str, format=JSONLD, only_conneg=True):
        # Run a CONSTRUCT/DESCRIBE query and return the parsed JSON-LD.
        results = self.query(query, format, only_conneg)
        results_bytes = results.serialize(format=format)
        results_json = results_bytes.decode('utf8').replace("'", '"')
        data = json.loads(results_json)
        return data

    def query(self, query: str, format=JSON, only_conneg=True):
        self.sparql.resetQuery()
        self.sparql.setMethod(GET)
        self.sparql.setOnlyConneg(only_conneg)
        self.sparql.setQuery(query)
        self.sparql.setReturnFormat(format)
        return self.sparql.queryAndConvert()

    def update_query(self, query: str, only_conneg=True):
        self.sparql.resetQuery()
        self.sparql.setMethod(POST)
        self.sparql.setOnlyConneg(only_conneg)
        self.sparql.setQuery(query)
        self.sparql.query()

store = Store('http://www.example.com/sparql/Example')
print(store.query("""SELECT ?s WHERE {?s ?p ?o} LIMIT 1"""))
print(store.graph_query("""DESCRIBE <http://www.example.com/sparql/Example/>"""))
The Challenge
The previous code can already:
Perform (read or update) queries to a SPARQL endpoint via HTTP requests.
Serialize the response to a standardized JSON RDF table, or an RDF graph, depending on the HTTP response.
Store the serialized response in a Python object.
But it still fails to implement these other aspects:
Provide an endpoint with the serialized response to a frontend application (such as React).
Handle incoming requests from the frontend application, then "translate" and execute them as SPARQL queries.
Send the response back to the frontend application.
And last, but not least, it fails completely to implement the real time aspect of this challenge.
The Questions:
How would you implement this?
Is this really the best approach?
Can the already working code be optimized?
Is there something that already does this?
Thank you so much!
Sorry, but I don't know much about Django, so I can't answer here with Django specifics.
However, I can say this: SPARQL has a specification for HTTP interactions (https://www.w3.org/TR/sparql11-protocol/), and it tells you to use sparql?query=... and sparql?update=... style URIs for querying a store, so why define a new way of doing things with store.query, store.graph_query, etc.?
Is there a Django-specific reason?
You can already pose questions to a SPARQL Endpoint using React or whatever you want right now, just as it is.
You said what is missing is to "provide an endpoint with the serialized response", but the SPARQL responses are this! SPARQL query response formats are defined in the spec (e.g. JSON: https://www.w3.org/TR/sparql11-results-json/), and SPARQLWrapper knows how to parse them into Python objects. Libraries in other languages, like rdflib.js in JavaScript, do too.
See YASGUI (https://triply.cc/docs/yasgui) for a stand-alone JS SPARQL client.
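That said, if you do want Django REST Framework to sit in front of the SPARQL endpoint, a minimal sketch might look like the view below. It assumes the Store class from the question is importable and uses a placeholder endpoint URL; it is only an outline, not a full implementation of the real-time syncing described above.

```python
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView

# Assumes the Store class from the question lives in a local module;
# the module name and endpoint URL are placeholders.
from .sparql_store import Store

store = Store('http://www.example.com/sparql/Example')

class SparqlProxyView(APIView):
    def get(self, request):
        # Forward a read query, e.g. GET /sparql/?query=SELECT ...
        query = request.query_params.get('query')
        if not query:
            return Response({'error': 'missing query parameter'},
                            status=status.HTTP_400_BAD_REQUEST)
        return Response(store.query(query))

    def post(self, request):
        # Forward an update, e.g. POST {"update": "INSERT DATA { ... }"}
        update = request.data.get('update')
        if not update:
            return Response({'error': 'missing update statement'},
                            status=status.HTTP_400_BAD_REQUEST)
        store.update_query(update)
        return Response(status=status.HTTP_204_NO_CONTENT)

# urls.py would then map something like:
# path('sparql/', SparqlProxyView.as_view())
```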

REST API programming: requests vs urllib2 in Python 2.7 -- the API needs authentication (Error 401)

I am a beginner trying to learn REST API programming with Python 2.7 to get data from the Socialcast API. From my research it looks like requests or urllib2 would work. I need to authenticate with a username and ID for the API. I tried using urllib2 and it gave me error 401.
Which one should I use? My goal is to produce .csv files from the data so I can visualize it. Thank you in advance.
This question will yield a bit of an opinion-based response, but I would suggest using requests. I find that when making requests that require parameters, requests is easier to manage. An example for Socialcast using requests would be:
import requests

parameters = {"email": emailAddress, "password": password}
r = requests.post(postUrl, parameters)
The postUrl would be the URL to make the POST request to, and emailAddress and password would be the values you use to log in.
For the CSV, take a look here, which includes a tutorial on going from JSON to CSV.
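As a rough sketch of that flow (the endpoint URL, authentication style, and response fields below are placeholders; check the Socialcast API docs for the real ones):

```python
import csv
import requests

# Placeholder endpoint and basic-auth credentials.
response = requests.get('https://yourcommunity.socialcast.com/api/messages.json',
                        auth=(emailAddress, password))
response.raise_for_status()
messages = response.json().get('messages', [])

# Write a couple of (assumed) fields out to CSV; adjust the keys to the real payload.
# (Python 3 shown; on Python 2.7, open the file with 'wb' and drop newline=''.)
with open('messages.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['id', 'title'])
    for message in messages:
        writer.writerow([message.get('id'), message.get('title')])
```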

Testing Flask REST server

I have a tiny Flask server that is supposed to load data from a file and run a function on it. This function returns a DataFrame, and I return the JSON version of it. Much to my surprise, this all works nicely. However, how would I test it? I have included some attempts below, but I don't understand Flask (nor REST) well enough yet:
#!/home/thomas/python
from flask import Flask
from flask.ext.restful import Resource, Api  # on newer installs: from flask_restful import Resource, Api

app = Flask(__name__)
api = Api(app)

class UniverseAPI(Resource):
    def get(self):
        import pandas as pd
        frame = pd.read_csv("//datasrv10//data$//AQ//test.csv", index_col=0, header=0)
        return frame.to_json()

api.add_resource(UniverseAPI, '/data/universe')
I am happy to include a few of my attempts here... I appreciate any hints. I have read the official documentation.
I should specify what I mean by testing. I can run this on my Linux server and can extract all the required information with the requests package. However, I want to create a unittest that works without starting the server on localhost. I think I have managed with the Flask test client. However, the problem now is that the requests response object and the Flask response object treat the underlying JSON strings rather differently, so I guess my problem is more related to JSON string issues than to Flask. Thanks for all your helpful feedback though.
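For reference, the kind of unittest I'm aiming for looks roughly like this (a sketch only, assuming the server module above is saved as server.py):

```python
import json
import unittest

from server import app  # assumes the Flask app above lives in server.py


class UniverseAPITest(unittest.TestCase):
    def setUp(self):
        self.client = app.test_client()

    def test_universe_returns_json(self):
        response = self.client.get('/data/universe')
        self.assertEqual(response.status_code, 200)
        # Flask's test response gives you the raw body; decode it yourself,
        # whereas requests' .json() would do this for you. Because get()
        # returns frame.to_json() (a string), the decoded payload is itself
        # a JSON string rather than a dict.
        payload = json.loads(response.get_data(as_text=True))
        self.assertTrue(payload)


if __name__ == '__main__':
    unittest.main()
```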
Well, the basics of writing a REST API are essentially a set of design principles. My understanding of it is based on this article by Miguel Grinberg: http://blog.miguelgrinberg.com/post/designing-a-restful-api-with-python-and-flask .
In it, he talks about how a REST API is:
"Stateless" - all interactions with the service can happen using the information from one request.
Built upon accessing "resources" from URIs using HTTP requests like GET, PUT, and POST. A resource could be an order in a store, a task in a web app, or whatever you like.
There's also a bunch of stuff about how the server should standardize all forms of communication between itself and the client, indicate whether it can do caching, and other things like that. From an initial design standpoint, though, this is "the point" as he put it:
"The task of designing a web service or API that adheres to the REST guidelines then becomes an exercise in identifying the resources that will be exposed and how they will be affected by the different request methods."
If you're looking for an interesting example of a REST API that might be suited to your interests (I know it is to mine), reddit's is open source. It's a relatable example to see how they try and structure the interactions behind requests: http://www.reddit.com/dev/api
