combined client and server using aiohttp - python

I am trying to find an example of a python aiohttp app that both makes http requests as a client AND processes http requests as a server.
I need to write an app that retrieves data from an internet web server AND responds to requests for the re-formatted and re-packaged data.
Could someone reference an example or provide an outline of how to structure such an app using aiohttp?
One issue I am having trouble figuring out is how to combine loop.run_until_complete(), which is used to start a client making requests, with web.run_app(), which is used to start a server, since each of these is a blocking call.
Somehow I need to start them both.
I thought web.AppRunner() might help me start the server in a non-blocking manner, and I came across the following example in the aiohttp documentation:
runner = web.AppRunner(app)
await runner.setup()
site = web.TCPSite(runner, 'localhost', 8080)
await site.start()
while True:
    await asyncio.sleep(3600)  # sleep forever
I thought I could use something like this to start the server and then also start the client. But I can't seem to call this from anything that isn't itself an async function (await outside async function).
I could also use some advice on how to share the data between the client and the server. The client needs to periodically refresh the data from the internet, and the server needs to continuously serve up the latest version of that data. Presumably I need some sort of locking mechanism while the client is updating the shared data resource?
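To show the structure I'm after, here is a minimal runnable sketch (the handler, the refresh interval, the port, and the stubbed-out upstream fetch are all my own placeholders, not working code I have yet):

```python
import asyncio
from aiohttp import web

# Latest re-packaged payload. Both coroutines run on one event loop,
# so replacing the value is effectively atomic and needs no lock.
shared = {"data": None}

async def refresh_data():
    # Stand-in for the real upstream fetch; in the real app this would
    # use aiohttp.ClientSession to GET the source URL and re-format it.
    shared["data"] = "fresh"

async def refresher(interval):
    # Client side: periodically refresh the shared data.
    while True:
        await refresh_data()
        await asyncio.sleep(interval)

async def handle(request):
    # Server side: serve whatever the client last fetched.
    return web.json_response({"data": shared["data"]})

async def main(run_seconds=None):
    app = web.Application()
    app.router.add_get("/", handle)
    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, "127.0.0.1", 8091)
    await site.start()  # non-blocking, unlike web.run_app()
    refresh_task = asyncio.create_task(refresher(interval=1))
    try:
        if run_seconds is None:
            while True:
                await asyncio.sleep(3600)  # sleep forever
        else:
            await asyncio.sleep(run_seconds)  # short run, for testing
    finally:
        refresh_task.cancel()
        try:
            await refresh_task
        except asyncio.CancelledError:
            pass
        await runner.cleanup()

# To run forever: asyncio.run(main())
```

Since the "client" and "server" parts are coroutines on the same event loop, a plain dict replacement like this seems safe without a lock, but I'd welcome corrections.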

Related

How to push notifications to Test Client in Flask-SocketIO?

I'm trying to receive pushes from the server as a client, using my test client as follows:
Client:
socket_client = socketio.test_client(app)
@socketio.on('hit_client')
def receive_message(json_data):
    print("Server has called!")
Server:
socketio.emit('hit_client', 'Hi Client!')
The server should be pushing and calling the hit_client channel, but that isn't being fired. However, the socket_client.get_received() has the emitted data. I thought the whole point of WebSockets was bidirectional communication (i.e. pushing function triggers)!
This is a very simple setup and it doesn't even seem to be working... Any help would be EXTREMELY appreciated. I've been slamming my head for hours.
The test client is not a Socket.IO client. Its only purpose is to help you write unit tests for your Socket.IO server. It is similar in concept to Flask's test client for HTTP routes. It only makes sense to use it in unit tests.
When the server emits something to the client, the test client will just store it and make it accessible in your test code via the get_received call. It will not fire any events, since that is not its intended purpose.
If you want to implement a Socket.IO client in Python, there is a package for that: https://pypi.python.org/pypi/socketIO-client. With this package, you can write a Python script that connects to the Socket.IO server and can send and receive events.

python tornado.httpclient.AsyncHTTPClient max_clients doesn't seem to work

We are using a Python Tornado server for a new project.
The server should work like a Node.js server, accepting thousands of connections and keeping them open for a long time until a response is ready.
The response is the result of multiple HTTP accesses to external resources, so of course we also need to support a lot of concurrently open HTTP client connections (a few hundred at least).
We tried to configure the AsyncHTTPClient like this:
if __name__ == "__main__":
    app = make_app()
    app.listen(8888)
    tornado.httpclient.AsyncHTTPClient.configure(
        "tornado.simple_httpclient.SimpleAsyncHTTPClient",
        max_clients=1000,
        defaults=dict(connect_timeout=float(10), request_timeout=float(100)))
    tornado.ioloop.IOLoop.current().start()
It seems that our server is working fine, but we have a problem with the HTTP client: it doesn't seem to scale to more than a dozen connections, and the application simply hangs until it gets a lot of timeout errors (error 599).
Any idea whether the Tornado async HTTP client is buggy, or are we using it the wrong way?
Any ideas for a replacement technology (in Python)?
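For completeness, here is the full shape of our startup code as a self-contained sketch (the handler and make_app contents are simplified stand-ins). Our understanding is that configure() only affects AsyncHTTPClient instances created after it runs, so we make sure nothing constructs a client before that call:

```python
import tornado.httpclient
import tornado.ioloop
import tornado.web

class PingHandler(tornado.web.RequestHandler):
    # Simplified stand-in for our real handlers.
    def get(self):
        self.write("ok")

def make_app():
    return tornado.web.Application([(r"/", PingHandler)])

def main():
    # configure() only applies to AsyncHTTPClient instances created
    # after it runs, so it comes before anything that builds a client.
    tornado.httpclient.AsyncHTTPClient.configure(
        "tornado.simple_httpclient.SimpleAsyncHTTPClient",
        max_clients=1000,
        defaults=dict(connect_timeout=10.0, request_timeout=100.0))
    app = make_app()
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()  # blocks, serving forever
```

We have verified that a client created after this call reports max_clients=1000, so we don't think the setting is silently ignored.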

Receive data on server side with django-websocket-redis?

I'm working with the django-websocket-redis library, which allows establishing websockets over uWSGI in a separate Django loop.
From the documentation I understand well how to send data from the server through websockets, but I don't understand how to receive.
Basically I have a client, and I want to periodically send status from the client to the server. What do I need to do to handle messages received from the client on the server side? What URL should I use on the client?
You can achieve that by using periodic Ajax calls from the client to the server. From the documentation:
A client wishing to trigger events on the server side, shall use XMLHttpRequests (Ajax), as they are much more suitable, rather than messages sent via Websockets. The main purpose for Websockets is to communicate asynchronously from the server to the client.
Unfortunately I was unable to find a way to achieve it using just websocket messages.

How server alone can push data to client browser using python and websockets

I'm trying to push some messages to the client (browser) without any request.
For that I use WebSockets and Python. I know WebSockets provide full-duplex communication, but I just need server-to-client push. I could only find bi-directional, request/response examples for WebSockets on the internet. When I tried to send in an infinite loop from the server after the handshake process, the browser hung up.
The code I used is in this post.
Is there any solution for this, or is it better to go with SSE?
Once the client initiates the WebSocket connection, the server is then free to send anything to the client side.

Tornado/Async Webserver theory, how to handle longer running operations to utilize the async server

I have just begun to look at tornado and asynchronous web servers. In many examples for tornado, longer requests are handled by something like:
make a call to tornado webserver
tornado makes async web call to an api
let tornado keep taking requests while callback waits to be called
handle the response in the callback and serve it to the user.
So, for hypothetical purposes, say users are making requests to the Tornado server at /retrieve. /retrieve will make a request to an internal API, say myapi.com/retrieve_posts_for_user_id/ or whatever. The API request could take a second to run while Tornado keeps taking requests; when it finally returns, Tornado serves up the response. First of all, is this flow the 'normal' way to use Tornado? Many of the code examples online would suggest so.
Secondly (this is where my mind starts to get boggled), assuming that the above flow is the standard flow, should myapi.com be asynchronous? If it's not async and the requests can take seconds apiece, wouldn't it create the same bottleneck a blocking server would? Perhaps an example of a normal use case for Tornado, or any async server, would help shed some light on this issue? Thank you.
Yes, as I understand your question, that is a normal use-case for Tornado.
If all requests to your Tornado server would make requests to myapi.com, and myapi.com is blocking, then yes, myapi.com would still be the bottleneck. However, if only some requests have to be handled by myapi.com, then Tornado would still be a win, as it can keep handling such requests while waiting for responses for the requests to myapi.com. But regardless, if myapi.com can't handle the load, then putting a Tornado server in front of it won't magically fix that. The difference is that your Tornado server will still be able to respond to requests even when myapi.com is busy.
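To make that flow concrete, here is a sketch of such a handler in modern coroutine style (the URL, class name, and user id are invented; many older examples use the callback style instead):

```python
import tornado.httpclient
import tornado.web

class RetrieveHandler(tornado.web.RequestHandler):
    async def get(self):
        client = tornado.httpclient.AsyncHTTPClient()
        # While this fetch is in flight, the IOLoop is free to accept
        # and handle other incoming requests.
        response = await client.fetch(
            "http://myapi.com/retrieve_posts_for_user_id/1")
        self.write(response.body)  # serve the API's result to the user
```

The await is the point where Tornado parks this request and goes back to serving others, which is exactly the behavior described above.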
