Build a microservice system using the Tornado framework - Python

I'm trying to build a small microservice system using the Tornado framework.
Here is the structure:
-users_service
-books_service
-public_api_service
users_service and books_service each connect to their own database (users.db and books.db, respectively), and each service runs on its own port (for example, books_service on localhost:6000 and public_api_service on localhost:7000). Only public_api_service is exposed to users: when a user calls the public API, public_api_service sends a request to users_service or books_service, gets the JSON response, formats it, and responds.
My question is: how do I properly send a request from public_api_service to users_service or books_service?
import json

from tornado.httpclient import HTTPClient, HTTPRequest
from tornado.httputil import url_concat

def get_listings_info(page_num, page_size):
    url_params = {
        # 'user_id': user_id,
        'page_num': page_num,
        'page_size': page_size
    }
    url = url_concat('http://127.0.0.1:6000/books', url_params)
    request = HTTPRequest(url=url, method='GET')
    # http_client = AsyncHTTPClient()
    http_client = HTTPClient()
    result = http_client.fetch(request)
    result = json.loads(result.body)
    # return result.body
    return result
I tried this method, but got this error: RuntimeError: Cannot run the event loop while another loop is running.
Any help would be appreciated.

My guess is that you are trying to run this code from inside of a Tornado application; HTTPClient is meant for standalone use.
From the Tornado documentation for HTTPClient:
Applications that are running an IOLoop must use AsyncHTTPClient instead.
This means that if you are running a Tornado application (which uses an IOLoop), HTTPClient will not work and you should use AsyncHTTPClient instead.
See the documentation for Tornado web clients here: https://www.tornadoweb.org/en/stable/httpclient.html
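As a minimal sketch, here is what the question's get_listings_info could look like rewritten as a coroutine with AsyncHTTPClient, assuming it is called from inside the running Tornado application (the books_service address is the placeholder from the question):

import json

from tornado.httpclient import AsyncHTTPClient
from tornado.httputil import url_concat

async def get_listings_info(page_num, page_size):
    url = url_concat('http://127.0.0.1:6000/books',
                     {'page_num': page_num, 'page_size': page_size})
    http_client = AsyncHTTPClient()
    # fetch() returns a Future; awaiting it lets the IOLoop keep serving
    # other requests while this one is in flight
    response = await http_client.fetch(url)
    return json.loads(response.body)

Any handler that calls it must itself be a coroutine, e.g. an async def get(self) that does books = await get_listings_info(1, 20).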

Related

When using Python/Tornado, is it possible to call another API thru http request within a handler?

I have written a set of Python REST APIs that are served by the Tornado web framework. The issue I am facing is this: when handling endpoint1 (or API1), I need some data that endpoint2 (or API2) can provide. So, inside the handler for endpoint1, I call something like this:
class endpoint1(tornado.web.RequestHandler):
    .........
    def get(self):
        ..........
        http_client = AsyncHTTPClient()
        url = "http://127.0.0.1:8686/v1/endpoint2"
        response = yield http_client.fetch(url)
But the code just hangs at this point. My guess is that it doesn't work because the framework is currently in the middle of servicing endpoint1 and I am trying to sneak in another request. I am looking for suggestions on how to get this to work without using MQ or databases.
I tried using nest_asyncio as well - no dice. Any help appreciated.
Turns out that nest_asyncio actually does the trick. Here is a link to another thread that explains it well: RuntimeError: This event loop is already running in python
import nest_asyncio
nest_asyncio.apply()

class endpoint1(tornado.web.RequestHandler):
    .........
    # the handler must be declared async for `await` to be valid
    async def get(self):
        ..........
        http_client = AsyncHTTPClient()
        url = "http://127.0.0.1:8686/v1/endpoint2"
        response = await http_client.fetch(url)
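For context, nest_asyncio works by patching asyncio so the already-running event loop can be re-entered. Note also that get is declared async def above, which is required for the await expression; on Tornado 5+ an async def handler awaiting AsyncHTTPClient.fetch usually works on its own, and nest_asyncio is only needed when something tries to re-enter the running loop.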

How to test python/flask app with third-party http lib?

I have a pytest suite for my Flask app that works great. However, I want to test some of my code that uses a third-party library (Qt) to send HTTP requests. How is this possible? I see flask-testing has the live_server fixture, which accomplishes this along with flask.url_for(), but starting up the server in the fixture takes too much time.
Is there a faster way to send an http request from a third-party http lib to a flask app?
Thanks!
Turns out you can do this by manually converting the third-party request to a FlaskClient request, monkeypatching whatever "send" method the third-party lib uses, and then converting the flask.Response back to a third-party reply object. All of this happens without using a TCP port.
Here is the fixture I wrote to bridge Qt http requests to the flask app:
# The imports are assumptions: they presume PyQt5 and the fixture names
# (qApp, client) from your existing conftest; adjust for PySide as needed.
import pytest
from flask.testing import FlaskClient
from PyQt5.QtCore import QIODevice, QTimer
from PyQt5.QtNetwork import (QNetworkAccessManager, QNetworkReply,
                             QNetworkRequest)

@pytest.fixture
def qnam(qApp, client, monkeypatch):
    def sendCustomRequest(request, verb, data):
        # Qt -> Flask
        headers = []
        for name in request.rawHeaderList():
            key = bytes(name).decode('utf-8')
            value = bytes(request.rawHeader(name)).decode('utf-8')
            headers.append((key, value))
        query_string = None
        if request.url().hasQuery():
            query_string = request.url().query()
        # method = request.attribute(QNetworkRequest.CustomVerbAttribute).decode('utf-8')
        # send
        response = FlaskClient.open(client,
                                    request.url().path(),
                                    method=verb.decode('utf-8'),
                                    headers=headers,
                                    query_string=query_string,
                                    data=data)
        # Flask -> Qt
        class NetworkReply(QNetworkReply):
            def abort(self):
                pass
        reply = NetworkReply()
        reply.setAttribute(QNetworkRequest.HttpStatusCodeAttribute, response.status_code)
        for key, value in response.headers:
            reply.setRawHeader(key.encode('utf-8'), value.encode('utf-8'))
        reply.open(QIODevice.ReadWrite)
        reply.write(response.data)
        QTimer.singleShot(10, reply.finished.emit)  # emit after return
        return reply
    qnam = QNetworkAccessManager.instance()  # or wherever you get your instance
    monkeypatch.setattr(qnam, 'sendCustomRequest', sendCustomRequest)
    return qnam
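A hypothetical test built on this fixture might look like the following; the /status route, the PyQt5 imports, and the host name are assumptions, not part of the answer above:

from PyQt5.QtCore import QUrl
from PyQt5.QtNetwork import QNetworkRequest

def test_status(qnam):
    # goes through the patched sendCustomRequest, so no real socket is opened
    request = QNetworkRequest(QUrl('http://testserver/status'))
    reply = qnam.sendCustomRequest(request, b'GET', None)
    status = reply.attribute(QNetworkRequest.HttpStatusCodeAttribute)
    assert status == 200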

How to do async api requests in a GAE application?

I am working on an application based on GAE with Python 2.7.13. What I want to do is make a bunch of async API calls inside a handler, something like this:
class MakeRequests(webapp2.RequestHandler):
    def post(self, *v, **kv):
        *do an async api call#1*
        *do an async api call#2*
        *do an async api call#3*
        *wait for responses from all of the above api requests*
        *build the response: if call#1 fails, set its expected*
        *attributes in the response to None; if call#2 succeeds, add its*
        *attributes to the response, etc. This is just an example.*
For that purpose, I have tried libraries like asyncio, grequests, requests and simple-requests; none of them seems to work because they are incompatible either with GAE or with Python 2.7.13.
Can anyone help me here?
Urlfetch, which is bundled with GAE by default, has a way of making asynchronous calls:
from google.appengine.api import urlfetch

def post(self, *v, **kv):
    rpcs = []
    for url in urls:
        rpc = urlfetch.create_rpc()
        urlfetch.make_fetch_call(rpc, url)
        rpcs.append(rpc)
    results = [rpc.get_result() for rpc in rpcs]
    # do stuff with results
If, for some reason, you don't want to use urlfetch, you can parallelize the requests manually by using threading and a synchronized Queue to collect the results, as sketched below.
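A rough sketch of that manual approach, assuming the same urls list as in the example above and GAE's Python 2.7 runtime (on Python 3 the imports would be queue and urllib.request):

import threading
import urllib2
from Queue import Queue

def fetch_all(urls):
    results = Queue()  # thread-safe, so workers can report concurrently

    def worker(url):
        try:
            results.put((url, urllib2.urlopen(url).read()))
        except Exception as exc:
            results.put((url, exc))  # surface failures instead of hanging

    threads = [threading.Thread(target=worker, args=(url,)) for url in urls]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return dict(results.get() for _ in urls)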

Mocking external API for testing with Python

Context
I am trying to write tests for functions that query an external API. These functions send requests to the API, get responses, and process them.
In my tests, I want to simulate the external API with a mock server that is run locally.
So far, the mock server runs successfully and responds to custom GET queries.
The problem
The external API responds with objects of type <class 'dict'>, while apparently all I can get from my mock server is a response of type <class 'bytes'>. The mock server fetches pre-defined data from disk and returns it through a stream. Since I cannot faithfully simulate the external API, my tests throw errors because of the wrong response types.
Following are snippets of my code with some explanations.
1. The setUp() function:
The setUp function is run at the beginning of the test suite. It is responsible for configuring and starting the server before the tests run:
def setUp(self):
    self.factory = APIRequestFactory()
    # Configure the mock server
    self.mock_server_port = get_free_port()
    self.mock_server = HTTPServer(('localhost', self.mock_server_port), MockServerRequestHandler)
    # Run the mock server in a separate thread
    self.mock_server_thread = Thread(target=self.mock_server.serve_forever)
    self.mock_server_thread.setDaemon(True)
    self.mock_server_thread.start()
2. The MockServerRequestHandler:
class MockServerRequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if re.search(config.SYSTEM_STATUS_PATTERN, self.path):
            # Response status code
            self.send_response(requests.codes.ok)
            # Response headers
            self.send_header("Content-Type", "application/json; charset=utf-8")
            self.end_headers()
            # Read the canned response from a file and serve it
            with open('/path/to/my-json-formatted-file') as data_file:
                response_content = json.dumps(json.load(data_file))
            # Write to a stream that needs bytes-like input
            # https://docs.python.org/2/library/basehttpserver.html
            self.wfile.write(response_content.encode('utf-8'))
            return
To my understanding of the official documentation, a BaseHTTPRequestHandler can only serve the content of its response by writing to a predefined stream (wfile), which needs to be given (and I am quoting an error message) a bytes-like object.
So my questions are:
Is there a way to make my mock server respond with other types of content than bytes? (JSON, python dicts ...)
Is it safe to write, in the functions that I test, a piece of code that will convert the bytes variables to Python dicts, just so I can test them with my mock server? Or is this violating some principles of testing?
Is there another way of writing a server that responds with JSON and python dicts?
In the comments, it sounds like you solved your main problem, but you're interested in learning how to mock out the web requests instead of launching a dummy web server.
Here's a tutorial on mocking web API requests, and the gory details are in the documentation. If you're using legacy Python, you can install the mock module as a separate package from PyPI.
Here's a snippet from the tutorial:
# The imports are assumptions based on the tutorial's project layout;
# on legacy Python, Mock and patch come from the mock package instead.
from unittest.mock import Mock, patch

from nose.tools import assert_list_equal

from project.services import get_todos

@patch('project.services.requests.get')
def test_getting_todos_when_response_is_ok(mock_get):
    todos = [{
        'userId': 1,
        'id': 1,
        'title': 'Make the bed',
        'completed': False
    }]
    # Configure the mock to return a response with an OK status code. Also, the mock
    # should have a `json()` method that returns a list of todos.
    mock_get.return_value = Mock(ok=True)
    mock_get.return_value.json.return_value = todos
    # Call the service, which will send a request to the server.
    response = get_todos()
    # If the request is sent successfully, then I expect a response to be returned.
    assert_list_equal(response.json(), todos)
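One detail from that tutorial worth repeating: the target string patches requests.get where it is looked up (project.services), not where it is defined; otherwise the code under test never sees the mock.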

Invoking OpenWhisk actions from a Python app?

I wonder what is the easiest way to invoke an OpenWhisk action from a Python app?
Perhaps something equivalent to https://github.com/apache/incubator-openwhisk-client-js/ but in Python. I know that there used to be a Python-based CLI (https://github.com/apache/incubator-openwhisk-client-python), but I haven't found any documentation on how to reuse it from my Python script.
Invoking actions from a Python application requires sending an HTTP request to the platform API; there is no official OpenWhisk SDK for Python.
The example code below shows how to invoke the platform API using the requests library.
import subprocess
import requests

APIHOST = 'https://openwhisk.ng.bluemix.net'
# `wsk property get --auth` prints the key as the third whitespace-separated
# field; decode() is needed on Python 3, where check_output returns bytes
AUTH_KEY = subprocess.check_output("wsk property get --auth", shell=True).split()[2].decode('utf-8')
NAMESPACE = 'whisk.system'
ACTION = 'utils/echo'
PARAMS = {'myKey': 'myValue'}
BLOCKING = 'true'
RESULT = 'true'

url = APIHOST + '/api/v1/namespaces/' + NAMESPACE + '/actions/' + ACTION
user_pass = AUTH_KEY.split(':')
response = requests.post(url, json=PARAMS,
                         params={'blocking': BLOCKING, 'result': RESULT},
                         auth=(user_pass[0], user_pass[1]))
print(response.text)
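As a side note, the AUTH_KEY lookup above shells out to the wsk CLI; if you'd rather avoid the subprocess call, the CLI stores the same key in its ~/.wskprops file (the AUTH entry), which you can read directly, assuming a standard CLI setup.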
Swagger documentation for the full API is available from the OpenWhisk project.
There is an open issue to create a Python client library to make this easier.
