How can I reduce memory usage of a Twisted server?

I wrote an audio broadcasting server with Python/Twisted. It works fine, but its memory usage grows too fast! I think that's because some users' network connections might not be good enough to download the audio in time.
My audio server broadcasts audio data to each listener's client; if some of them can't download the audio in time, the server keeps that audio data buffered until the listener receives it. What's more, since it is a broadcasting server that receives audio data and sends it to many different clients, I thought Twisted copies that data into a separate buffer for each client, even though it is the same piece of audio.
I want to reduce the memory usage, so I need to know when the audio has actually been received by a client, so that I can decide when to drop clients that are too slow. But I have no idea how to achieve that with Twisted. Does anyone have an idea?
And what else can I do to reduce memory usage?

You didn't say, but I'm going to assume that you're using TCP. It would be hard to write a UDP-based system whose memory usage kept increasing because of clients who can't receive data as fast as you're trying to send it.
TCP has built-in flow control. If a receiver cannot read data as fast as you'd like to send it, that information is made available to you so you can send more slowly. The way this works with the BSD socket API is that a send(2) call will block (or, on a non-blocking socket, fail with EWOULDBLOCK) to indicate that it cannot add any more bytes to the send buffer. The way it works in Twisted is through a system called "producers and consumers". The gist of this system is that you register a producer with a consumer. The producer calls write on the consumer repeatedly. When the consumer cannot keep up, it calls pauseProducing on the producer. When the consumer is again ready for more data, it calls resumeProducing on the producer.
You can read about this system in more detail in the producer/consumer howto, part of Twisted's documentation.
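As a rough illustration of that system, here is a minimal sketch; the class name, the forward method, and the policy of skipping chunks for paused clients are my assumptions, not a definitive implementation:

from twisted.internet.interfaces import IPushProducer
from zope.interface import implementer

@implementer(IPushProducer)
class AudioForwarder:
    """Pushes audio chunks to one client's transport, honoring back-pressure."""

    def __init__(self, transport):
        self.transport = transport
        self.paused = False
        # Register as a streaming (push) producer: the transport will call
        # pauseProducing/resumeProducing as its send buffer fills and drains.
        transport.registerProducer(self, True)

    def pauseProducing(self):
        self.paused = True

    def resumeProducing(self):
        self.paused = False

    def stopProducing(self):
        self.paused = True

    def forward(self, chunk):
        # Instead of buffering forever, skip (or count) chunks for clients
        # that cannot keep up; a client that stays paused too long can be
        # dropped with self.transport.loseConnection().
        if not self.paused:
            self.transport.write(chunk)

With one AudioForwarder per connection, each incoming chunk is written once per client that can keep up, and slow clients stop consuming memory instead of accumulating an ever-growing backlog.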

Python's garbage collector runs automatically; beyond that, go through your code and delete references to data you're no longer using so the memory can be reclaimed.
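A minimal illustration of that advice (read_chunks is a placeholder, not part of the question's code):

import gc

def read_chunks():
    # placeholder for loading a large batch of audio data
    return [b"\x00" * 1024] * 10000

buffered_audio = read_chunks()
# ... use buffered_audio ...
del buffered_audio   # drop the reference so the memory can be reclaimed
gc.collect()         # optional: force collection of any reference cycles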

Related

Share the same gRPC server streaming iterator with many clients

We are writing an application that leverages gRPC. We implemented a bunch of unary RPCs that clients can call regularly, getting the expected response.
We also have a server-to-client streaming RPC dedicated to continuously streaming information that is shared across all the clients (say, readings from local hardware sensors, such as room temperatures).
The Python StatusStreaming function currently returns the streaming iterator and loops constantly to stream data to the clients.
The problem is that it returns a new iterator for every client; since the data is updated every 100 ms, our system cannot serve more than a dozen clients before reaching full load. It also doesn't make sense to run a separate loop for every client, as we need to send exactly the same content to all of them.
Is it actually possible to broadcast the same streaming content to many clients without having a different connection/channel for each of them?
Some background
We use gRPC-web to implement the RPCs on browser clients, so bidirectional streaming is not possible (I have no idea how we would use it anyhow, but it wouldn't be available).
The versions used are those tied to grpc-web (grpc at commit d8772cf and protobuf at 3.7.0).
Our server is limited in resources, as it runs on a small embedded machine (1 core, 1 GB RAM).
Here is the streamer:

# Inside a method of the service class
def _Status(self):
    system_controller = self._system_status()
    return protos.Stream(system_controller)

# Real Python streaming method
def Status(self, request, context):
    def streaming_iterator():
        while context.is_active():
            time.sleep(0.5)
            yield self._Status()
    return streaming_iterator()
A new connection (and a dedicated streaming channel) is created for every client, and so a new while loop spawns to generate the data to send. This behaviour isn't needed, because the same information could actually be shared across all the clients in a broadcast fashion. A sketch of that idea follows.
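For illustration, a minimal sketch of the shared-producer approach, assuming a threaded gRPC server (SharedStatus, read_status, and the timing values are hypothetical). Each client still gets its own HTTP/2 stream, which gRPC requires, but the sensor read and message construction happen once per interval and are fanned out:

import threading
import time

class SharedStatus:
    """One background thread refreshes a shared snapshot; every client's
    streaming iterator reads the same object instead of generating its own."""

    def __init__(self, read_status, interval=0.5):
        self._read_status = read_status   # hypothetical callable returning a protobuf message
        self._latest = None
        self._cond = threading.Condition()
        self._interval = interval
        threading.Thread(target=self._refresh, daemon=True).start()

    def _refresh(self):
        while True:
            status = self._read_status()  # the expensive read happens once per interval
            with self._cond:
                self._latest = status
                self._cond.notify_all()
            time.sleep(self._interval)

    def stream(self, context):
        # Each client waits for a fresh snapshot, then yields it outside the
        # lock, so slow consumers never block the refresher or each other.
        while context.is_active():
            with self._cond:
                self._cond.wait(timeout=1.0)
                latest = self._latest
            if latest is not None:
                yield latest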

Efficient way of sending a large number of images from client to server

I'm working on a project where one client needs to take several snapshots from a camera (i.e. it's actually capturing a short video, hence a stream of frames), then send all the images to a server, which performs some processing on them and returns a result to the client.
Client and server are all running Python3 code.
The critical part is the image sending one.
Some background first: the images are 640x480 JPEG files. JPEG was chosen as a default, but lower-quality encodings can be selected as well. They are captured in sequence by a camera, so we have approximately ~600 frames to send. Each frame is around 110 KiB.
The client is a Raspberry Pi 3 Model B+. It sends the frames via Wi-Fi to a 5c server. Server and client both reside in the same LAN for the prototype version, but future deployments might be different, both in terms of connectivity medium (wired or wireless) and area (LAN or metro).
I've implemented several solutions for this:
Using Python sockets on the server and the client: I either send each frame directly after it is captured, or I send all the images in sequence once the whole capture is done.
Using GStreamer: I launch a GStreamer endpoint on my client and send the frames directly to the server as I stream. I capture the stream on the server side with OpenCV compiled with GStreamer support, then save the frames to disk.
Now, the issue I'm facing is that even if both solutions work 'well' (they get the 'final' job done, which is to send data to a server and receive a result based on some remote processing), I'm convinced there is a better way to send a large amount of data to a server, using either the Python socket library or any other available tool.
All the personal research I've done on the matter led me either to solutions similar to mine using Python sockets, or to ones out of context (relying on backends other than pure Python).
By a better way, I mean:
1. A solution that saves as much bandwidth as possible.
2. A solution that sends all the data as fast as possible.
For 1., I slightly modified my first solution to archive and compress all the captured frames into a .tgz file that I send over to the server. This indeed decreases the bandwidth usage but also increases the time spent on both ends (due to the compression/decompression processes). It's obviously particularly true when the dataset is large.
For 2., GStreamer allowed me to have a negligible delay between the capture and the reception on my server. However, I have no compression at all and, for the reasons stated above, I cannot really use this library for further development.
How can I send a large number of images from one client to one server with minimal bandwidth usage and delay in Python?
If you want to transfer images as frames, you can use an existing app like MJPEG-Streamer, which encodes images from a webcam interface to JPEG, reducing the image size. If you need a more robust transfer with advanced encoding, you can use a Linux tool like FFmpeg with streaming, which is documented here.
If you want a lower-level implementation and to control the whole stream from your own code for modifications, you can use a web framework like Flask and transfer your images directly over the HTTP protocol. You can find a good example here.
If you don't want to stream, you can convert the whole set of images to an encoded video format like H.264 and then transfer the bytes over the network. You can use OpenCV to do this; a sketch of this approach is below.
There are also some good libraries written in Python, like pyffmpeg.
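As a rough sketch of that video-encoding approach (the frame location, codec choice, and server address are all assumptions): pack the captured frames into one compressed video with OpenCV, then send the file's bytes over a plain TCP socket with a length prefix.

import glob
import socket
import struct

import cv2

def encode_frames(pattern="frames/*.jpg", out_path="batch.mp4", fps=30):
    paths = sorted(glob.glob(pattern))        # hypothetical frame files
    height, width = cv2.imread(paths[0]).shape[:2]
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")  # or an H.264 codec if your build has one
    writer = cv2.VideoWriter(out_path, fourcc, fps, (width, height))
    for path in paths:
        writer.write(cv2.imread(path))
    writer.release()
    return out_path

def send_file(path, host="192.168.1.10", port=5001):  # hypothetical server address
    with open(path, "rb") as f:
        data = f.read()
    with socket.create_connection((host, port)) as sock:
        sock.sendall(struct.pack("!Q", len(data)))  # 8-byte length prefix so the
        sock.sendall(data)                          # server knows where the file ends

The bandwidth saving here comes from the codec's inter-frame compression: nearly identical consecutive frames cost far less than 600 independent JPEGs.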
You can also restream the camera over the network using FFmpeg so that the client can read it either way; this will reduce delays.

Streaming data to clients

I have a program that sniffs network data and stores it in a database using pcapy (based on this). I need to make the data available in real time over a network connection.
Right now, when I run the program, it starts a second thread for the sniffer and a Twisted server on the main thread; however, I have no idea how to let clients 'tap into' the sniffer running in the background.
The end result should be that a client enters a URL and the connection is kept open until the client disconnects (even when there's nothing to send); whenever there's network activity, the sniffer will pick it up and the server will send it to the clients.
I'm a beginner with Python, so I'm quite overwhelmed; if anyone could point me in the right direction, it would be greatly appreciated.
Without more information (a simple code sample that doesn't work as you expect, perhaps) it's tough to give a thorough answer.
However, here are two pointers which may help you:
Twisted Pair, an (unfortunately very rudimentary and poorly documented) low-level/raw-sockets networking library within Twisted itself, which may be able to implement the packet capture directly in a Twisted-friendly way, or
The recently-released Crochet, which will allow you to manage the background Twisted thread and its interactions with your pcapy-based capture code.
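A minimal sketch of the Crochet approach (the protocol, the broadcast helper, and the port are assumptions): the pcapy loop stays in its own thread and hands each packet to the reactor, which fans it out to every connected client.

from crochet import setup, run_in_reactor
from twisted.internet.protocol import Factory, Protocol

setup()  # start the Twisted reactor in a background thread

clients = set()

class StreamProtocol(Protocol):
    def connectionMade(self):
        clients.add(self.transport)       # keep the connection open
    def connectionLost(self, reason):
        clients.discard(self.transport)

@run_in_reactor
def start_server(port=8000):
    from twisted.internet import reactor
    reactor.listenTCP(port, Factory.forProtocol(StreamProtocol))

@run_in_reactor
def broadcast(packet_bytes):
    # Called from the sniffer thread; @run_in_reactor makes it run safely
    # on the reactor thread, so mutating `clients` needs no locking.
    for transport in clients:
        transport.write(packet_bytes)

The sniffer loop then just calls broadcast(raw_packet) for every packet it captures.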

Socket queue (Twitter streaming as a reference)

I just found out that Twitter's streaming endpoints somehow support detection of slow connections.
Reference: https://dev.twitter.com/docs/streaming-apis/parameters#stall_warnings (and the bottom of the page)
The idea is that socket send probably processes data piece by piece, and the server knows when each packet is received by the client, so it can maintain a queue and always know its size.
That would be easy if the client sent a confirmation packet for each of them, but that is not the case with the Twitter Streaming API: it's a one-way transfer.
My question is: how did they achieve that? I can't see a way to do it without some very low-level raw-socket support, but I may be forgetting something here. With some low-level support we could probably get ACKs for each packet. Is that even possible? Can ACKs somehow be traced?
Any other ideas how this was done?
Any way to do this, e.g., in Python? An example in any other language would be appreciated too.
Or maybe I'm in over my head here and it simply tracks how many bytes have not yet been pushed through socket.send? But isn't that a poor indication of the client's connection?
I started off thinking along the same lines as you but I think the implementation is actually much easier than we both expect.
Twitter's API docs state:-
"A client reads data too slowly. Every streaming connection is backed by a queue of messages to be sent to the client. If this queue grows too large over time, the connection will be closed." - https://dev.twitter.com/docs/streaming-apis/connecting#Disconnections
Based on the above, I imagine Twitter has a thread pushing tweets onto a queue, and a long-lived HTTP connection to each client (kept open with a while loop) that pops a message off the queue and writes the data to the HTTP response on each loop iteration.
Now, if you imagine what happens inside that while loop in terms of buffers: Twitter pops an item off the queue, then writes the tweet data to some kind of output buffer; that buffer gets flushed and in turn fills up a TCP buffer for transport to the client.
If a client reads data slowly from its TCP buffer, the server's TCP send buffer fills up. That means that when the server's output buffer is flushed, the flush blocks because the data cannot be written to the TCP buffer. Consequently, the while loop pops tweets off the queue less often (it spends its time blocked on the flush), causing the tweet queue to fill up.
Now you would just need a check at the beginning of each loop iteration to see whether the tweet queue has reached some predefined threshold.
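A minimal sketch of that mechanism (the threshold and names are assumptions): the per-client queue grows while the blocking send stalls, and the server drops the client once the backlog passes a limit.

import queue

MAX_BACKLOG = 100  # hypothetical threshold

def stream_to_client(sock, messages):
    """messages is a queue.Queue that a producer thread fills with tweets."""
    while True:
        if messages.qsize() > MAX_BACKLOG:
            sock.close()          # the client reads too slowly: disconnect it
            return
        msg = messages.get()
        sock.sendall(msg)         # blocks while the client's TCP receive
                                  # window (and our send buffer) is full,
                                  # which is what lets the queue grow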

Message queue proxy in Python + Twisted

I want to implement a lightweight message queue proxy. Its job is to receive messages from a web application (PHP) and send them to the message queue server asynchronously. The reason for this proxy is that the MQ isn't always available; it is sometimes lagging, or even down, but I want to make sure the messages are delivered and that the web application returns immediately.
So, PHP would send the message to the MQ proxy running on the same host. That proxy would save the messages to SQLite for persistence, in case of crashes. At the same time it would send the messages from SQLite to the MQ in batches when the connection is available, and delete them from SQLite.
Now, the way I understand it, there are these components in this service:
1. message listener (listens for messages from PHP and writes them to an incoming queue)
2. DB flusher (reads messages from the incoming queue and saves them to a database; needed due to SQLite's single-threadedness)
3. MQ connection handler (keeps the connection to the MQ server alive by reconnecting)
4. message sender (collects messages from the SQLite db, sends them to the MQ server, then removes them from the db)
I was thinking of using Twisted for #1 (TCPServer), but I'm having trouble integrating it with the other components, which aren't event-driven. Intuition tells me that each of these components should run in a separate thread, because all are IO-bound and independent of each other, but I could easily put them in a single thread. Even so, I couldn't find any good and clear (to me) examples of how to implement such a worker thread alongside Twisted's main loop.
The example I started with is chatserver.py, which uses service.Application and internet.TCPServer objects. If I start my own thread prior to creating the TCPServer service, it runs a few times, but then it stops and never runs again. I'm not sure why this is happening, but it's probably because I don't use threads with Twisted correctly.
Any suggestions on how to implement a separate worker thread and keep Twisted? Do you have any alternative architectures in mind?
You're basically considering writing an ad-hoc extension to your messaging server, whose job it is to provide whatever reliability guarantees you've asked of it.
Instead, perhaps you should take the hardware where you were planning to run this new proxy and run another MQ node on it. The new node should take care of persisting and relaying messages that you deliver to it while the other nodes are overloaded or offline.
Maybe using a separate thread in Twisted to get around a blocking call isn't the best bang for your buck, but sometimes the least evil solution is the best one. Here's a link that shows you how to integrate threading into Twisted:
http://twistedmatrix.com/documents/10.1.0/core/howto/threading.html
Sometimes, in a pinch, easy-to-implement beats hours or days of research that may all turn out to be for nought.
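As a small sketch of that approach (save_to_sqlite is a placeholder for your blocking insert): deferToThread runs the blocking work on Twisted's thread pool, keeping the reactor responsive.

from twisted.internet.threads import deferToThread

def save_to_sqlite(message):
    # blocking SQLite insert goes here (hypothetical)
    ...

def on_message(message):
    # Runs the blocking call on the reactor's thread pool and returns a
    # Deferred that fires when the write is done.
    d = deferToThread(save_to_sqlite, message)
    d.addErrback(lambda failure: failure.printTraceback())
    return d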
A neat solution to this problem would be to use the key-value store Redis. It's a high-speed persistent data store with plenty of clients, among them a PHP and a Python client. If you want a timed/batch process to handle the messages, it saves you from creating a database and also covers your persistence story. It runs fine on Cygwin/Windows as well as POSIX environments.
The PHP Redis client is here.
The Python client is here.
Both have very clean and simple APIs. Redis also offers a publish/subscribe mechanism, should you need it, although it sounds like that would be of limited value if you're publishing to an inconsistent queue.
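A minimal sketch of the Redis-backed handoff (the key name is an assumption): the PHP side RPUSHes each message, and a Python worker drains them in batches whenever the MQ is reachable.

import redis

r = redis.Redis(host="localhost", port=6379)

def drain_batch(max_items=100):
    # Pop up to max_items pending messages; anything not popped stays in
    # Redis, which persists to disk, so a crash doesn't lose messages.
    batch = []
    for _ in range(max_items):
        raw = r.lpop("mq_proxy_queue")   # hypothetical list key
        if raw is None:
            break
        batch.append(raw)
    return batch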

Categories

Resources