Releasing resources when Pyro4 client disconnects unexpectedly - python

I have a Pyro4 distributed system with multiple clients connecting to a single server. These clients connect to a remote object, and that object may allocate resources in the system (virtual devices, in my case).
Once a client disconnects (say, because of a crash), I need to release those resources. What is the proper way to detect that a specific client has disconnected from a specific object?
I've tried different things:
Overriding the Daemon.clientDisconnect method. I get a connection parameter from this method, but I can't correlate it to an object, because I have no way to know which remote object that connection refers to.
Using Pyro4.current_context in Daemon.clientDisconnect. This doesn't work because it is a thread-local object; if I have more clients connected than threads in my pool, I get repeated contexts.
Using Proxy._pyroAnnotations as in the "usersession" example from the Pyro4 project. This doesn't help either, because I read the annotation from the Pyro4.core.current_context.annotations attribute, which shows me the wrong annotations when Daemon.clientDisconnect is called (I imagine due to the same thread-related issues).
Using instance_mode="session" and the __del__ method in the remote class (each client gets a separate instance of the class, so the instance should be destroyed once the client disconnects). But this relies on the __del__ method, which has well-known problems, as many Python programmers would point out.
I added my current solution as an answer, but I really would like to know if there's a more elegant way of doing this with Pyro4, as this scenario is a recurrent pattern in network programming.

Pyro 4.63 will probably have some built-in support for this to make it easier to do. You can read about it here: http://pyro4.readthedocs.io/en/latest/tipstricks.html#automatically-freeing-resources-when-client-connection-gets-closed and try it out if you clone the current master from GitHub. Maybe you can take a look and see if that would make your use case simpler?
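For reference, a minimal sketch of how that mechanism looks, assuming the API described in the linked documentation (Pyro4 >= 4.63): you register a resource object that has a close() method with the call context, and Pyro closes it for you if the client connection drops. VirtualDevice here is a hypothetical resource class, not part of Pyro4:
import Pyro4

@Pyro4.expose
@Pyro4.behavior(instance_mode="session")
class RemoteObject(object):
    def allocate(self, name):
        device = VirtualDevice(name)  # hypothetical resource with a close() method
        # Pyro will call device.close() automatically if this client's
        # connection is closed or lost.
        Pyro4.current_context.track_resource(device)
        return name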

I use the Proxy._pyroHandshake attribute as a client ID on the client side and override Daemon.validateHandshake and Daemon.clientDisconnect. This way, on every new connection I associate the handshake data (unique per client) with that connection. But I would really like to know if there's a more elegant way to do this in Pyro4, as it is a pattern that comes up very often in network programming.
Notice that instead of holding the Proxy as an attribute of Client, Client could also extend Pyro4.Proxy and use _pyroAnnotations to send the client ID with all the remote calls (a sketch of that variant follows the code below).
import uuid

import Pyro4


class Client:
    def __init__(self):
        # String form so the ID serializes consistently across calls.
        self._client_id = str(uuid.uuid4())
        # Assumes the daemon below listens on 127.0.0.1:9090.
        self._proxy = Pyro4.Proxy("PYRO:server@127.0.0.1:9090")
        # The handshake data doubles as our unique client ID.
        self._proxy._pyroHandshake = self._client_id
        self._proxy._pyroBind()

    def allocate_resource(self, resource_name):
        self._proxy.allocate_resource(self._client_id, resource_name)


class Server:
    def __init__(self):
        self._client_id_by_connection = {}
        self._resources_by_client_id = {}

    def client_connected(self, connection, client_id):
        # Called as Daemon.validateHandshake(connection, handshake_data).
        self._client_id_by_connection[connection] = client_id
        self._resources_by_client_id[client_id] = []

    def client_disconnected(self, connection):
        # Called as Daemon.clientDisconnect(connection); free everything
        # the disconnected client had allocated.
        client_id = self._client_id_by_connection.pop(connection)
        for resource in self._resources_by_client_id.pop(client_id):
            resource.free()

    @Pyro4.expose
    def allocate_resource(self, client_id, resource_name):
        # Resource is the application's own resource class.
        new_resource = Resource(resource_name)
        self._resources_by_client_id[client_id].append(new_resource)


daemon = Pyro4.Daemon(host="127.0.0.1", port=9090)
server = Server()
daemon.register(server, objectId="server")
daemon.clientDisconnect = server.client_disconnected
daemon.validateHandshake = server.client_connected
daemon.requestLoop()
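And a minimal sketch of the annotation-based variant mentioned above. The AnnotatedProxy subclass and the "CLID" annotation code are illustrative (annotation keys are 4-character codes and values must be bytes); on the server, each incoming call could then read Pyro4.current_context.annotations.get("CLID") to recover the caller's ID, with the caveat from the question that the context is only reliable inside the call itself:
import uuid

import Pyro4


class AnnotatedProxy(Pyro4.Proxy):
    """Hypothetical Proxy subclass that tags every remote call with a client ID."""

    def __init__(self, uri):
        super().__init__(uri)
        self._client_id = str(uuid.uuid4())

    def _pyroAnnotations(self):
        # Sent along with every remote call made through this proxy.
        return {"CLID": self._client_id.encode("ascii")}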

Related

Managing Connections in an Azure Serverless Function App

Microsoft recommends maintaining a single instance of CosmosClient across your whole application, and I'm trying to achieve this in my Function App (with more than just CosmosClient). However, even when re-using both database and container proxies, I always see a warning that I have hit the maximum number of connections (10) to Cosmos and that it's discarding the connection when I send through enough requests.
For context, it's a serverless Python Function App triggered by a message queue, and the connections are managed in shared code in a helper function. I have to use the Cosmos SDK because I have to both read and update Cosmos docs.
Has anyone successfully navigated this in the past? Would it simply be best practice to instantiate a new connection for every single function call? I tried creating new CosmosClients when receiving burst traffic, but that proved very difficult to do efficiently.
Here's an example of the class I'm using to manage connections:
import logging

from azure.cosmos import CosmosClient

COSMOS_CLIENT = None


class Client:
    def __init__(self):
        self.cosmos_client: CosmosClient = self._get_global_cosmos_client()

    def _get_global_cosmos_client(self) -> CosmosClient:
        global COSMOS_CLIENT
        if COSMOS_CLIENT is None:
            logging.info('[COSMOS] NEW CLIENT CONNECTION')
            COSMOS_CLIENT = CosmosClient.from_connection_string(
                COSMOS_DB_CONNECTION_STRING)  # connection string from app settings
        return COSMOS_CLIENT
Conceptually, because you are creating the client based on the connection string (there is always one), this code should always create one client.
The number of connections is not the number of clients.
Do not create multiple clients; always create one client per account you are interacting with. That single client can perform operations on all existing databases/containers in the account.
Creating multiple clients just creates a problem: each client maintains its own independent connections rather than reusing them, so the connection count grows higher than it would with a single shared client, eventually leading to SNAT port exhaustion.
The error message "Connection pool is full, discarding connection" is not generated by the Cosmos client directly, but by the underlying urllib3.connectionpool. See: https://github.com/Azure/azure-sdk-for-python/issues/12102
The CosmosClient supports passing the session through the transport: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/core/azure-core/CLIENT_LIBRARY_DEVELOPER.md#transport -> https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cosmos/azure-cosmos/azure/cosmos/_cosmos_client_connection.py#L198.
Reference: https://github.com/Azure/azure-sdk-for-python/issues/12102#issuecomment-645641481
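Following the approach in that linked comment, a rough sketch of enlarging the underlying urllib3 pool by passing a custom requests session through the transport. The pool sizes are arbitrary and COSMOS_DB_CONNECTION_STRING is the setting from the question's code; treat this as an illustration, not a tuned configuration:
import requests
from azure.core.pipeline.transport import RequestsTransport
from azure.cosmos import CosmosClient

session = requests.Session()
# Raise urllib3's per-host pool size above its default of 10.
adapter = requests.adapters.HTTPAdapter(pool_connections=10, pool_maxsize=50)
session.mount("https://", adapter)

COSMOS_CLIENT = CosmosClient.from_connection_string(
    COSMOS_DB_CONNECTION_STRING,
    transport=RequestsTransport(session=session, session_owner=False))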

Python & HTTPX: How does httpx client's connection pooling work?

Consider this function that makes a simple GET request to an API endpoint:
import httpx

def check_status_without_session(url: str) -> int:
    response = httpx.get(url)
    return response.status_code
Running this function will open a new TCP connection every time the function check_status_without_session is called. Now, this section of HTTPX documentation recommends using the Client API while making multiple requests to the same URL. The following function does that:
import httpx

def check_status_with_session(url: str) -> int:
    with httpx.Client() as client:
        response = client.get(url)
        return response.status_code
According to the docs using Client will ensure that:
... a Client instance uses HTTP connection pooling. This means that when you make several requests to the same host, the Client will reuse the underlying TCP connection, instead of recreating one for every single request.
My question is: in the second case, I have wrapped the Client context manager in a function. If I call check_status_with_session multiple times with the same URL, wouldn't that just create a new pool of connections each time the function is called? That would imply it's not actually reusing connections. Since the function's stack frame is destroyed after execution, the Client object should be destroyed as well, right? Is there any advantage in doing it like this, or is there a better way?
Is there any advantage in doing it like this or is there a better way?
No, there is no advantage using httpx.Client in the way you've shown. In fact the httpx.<method> API, e.g. httpx.get, does exactly the same thing!
The "pool" is a feature of the transport manager held by Client, which is HTTPTransport by default. The transport is created at Client initialisation time and stored as the instance property self._transport.
Creating a new Client instance means a new HTTPTransport instance, and transport instances have their own TCP connection pool. By creating a new Client instance each time and using it only once, you get no benefit over using e.g. httpx.get directly.
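If you do want reuse across calls, a minimal sketch of that pattern is to create one Client at module or application scope and pass it in, rather than creating it per call (the names here are illustrative):
import httpx

# One shared client for the lifetime of the application.
shared_client = httpx.Client()

def check_status(client: httpx.Client, url: str) -> int:
    return client.get(url).status_code

status = check_status(shared_client, "https://example.com")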
And that might be OK! Connection pooling is an optimisation over creating a new TCP connection for each request. Your application may not need that optimisation, it may be performant enough already for your needs.
If you are making many requests to the same endpoint in a tight loop, keeping the loop inside the client's context manager may net you some throughput gains, e.g.
with httpx.Client(base_url="https://example.com") as client:
    results = [client.get(f"/api/resource/{idx}") for idx in range(100)]
For such I/O-heavy workloads you may do even better by executing results in parallel, e.g. using httpx.AsyncClient.
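For instance, a minimal AsyncClient sketch of the same loop, assuming the same illustrative https://example.com endpoints as above:
import asyncio

import httpx

async def fetch_all() -> list:
    async with httpx.AsyncClient(base_url="https://example.com") as client:
        # Issue all 100 requests concurrently over the pooled connections.
        responses = await asyncio.gather(
            *(client.get(f"/api/resource/{idx}") for idx in range(100)))
    return [response.status_code for response in responses]

status_codes = asyncio.run(fetch_all())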

Is there a python equivalent for .isConnected functionality in C#

I have been looking for the equivalent of C#'s isConnected() functionality in Python: https://learn.microsoft.com/en-us/dotnet/api/system.net.sockets.socket.connected?view=net-5.0
What I basically need is to check whether the socket connection is still open on the other end (the primary server). If not, move the connection to a new socket (the backup server).
I have been quite blank as to how I can move the existing connection to another server without having to reconnect again. Any kind of guidance and help will be really appreciated.
Right now, my connection to the server is made in login(). I want the existing user to move to the secondary server if the primary disconnects, so they can simply keep performing file and word operations. What changes should I make in my code to achieve this?
My current code structure is:
Client side:
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

def file_operation():
    ...  # do_something

def word_addition():
    ...  # do_something

def login():
    s.connect((host, port))  # connect() takes a single (host, port) tuple

if __name__ == "__main__":
    login()
Server side:
def accept_client():
    conn, addr = s.accept()  # accept() returns (connection, address)
    ...  # do_something

def GUI():
    accept_client()

if __name__ == "__main__":
    GUI()
According to the Python socket documentation, no, there isn't. Python's sockets are merely a wrapper around the Berkeley sockets API, so whatever is in that API is what's in the Python API.
If you look at the source code for Microsoft's Socket.cs class, you'll see that it maintains a boolean field, m_IsConnected, to indicate the socket's current connection status. You could potentially do the same with your own custom socket class, using Microsoft's code as a model for writing your Python code.
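A minimal sketch of that idea, assuming you only need to track state changes made through your own wrapper (like m_IsConnected, this reflects what your side has done, not whether the peer silently vanished):
import socket

class TrackedSocket(socket.socket):
    """Socket subclass that tracks its own connect/close state."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._connected = False

    def connect(self, address):
        super().connect(address)
        self._connected = True

    def close(self):
        self._connected = False
        super().close()

    @property
    def is_connected(self):
        return self._connected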
Or, you could simply use the socket as you normally would, and switch to a different server when a socket error occurs.
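A sketch of that error-driven failover approach, with hypothetical primary/backup addresses standing in for your real servers:
import socket

SERVERS = [("primary.example.com", 5000), ("backup.example.com", 5000)]

def connect_with_failover():
    """Return a socket connected to the first reachable server."""
    for address in SERVERS:
        try:
            return socket.create_connection(address, timeout=5)
        except OSError:
            continue
    raise ConnectionError("no server reachable")

def send_with_failover(sock, data):
    """Send data; on a socket error, reconnect elsewhere and retry once."""
    try:
        sock.sendall(data)
        return sock
    except OSError:
        sock.close()
        sock = connect_with_failover()
        sock.sendall(data)
        return sock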

Closing client connection to kubernetes API server in python client

I am using the kubernetes-client library in Python and, looking at the various examples, it appears we don't need to explicitly close the client connection to the API server. Does the client connection get terminated automatically, or are the examples missing the call to close the connection? I also found the docs page for the APIs (AppsV1 for example); the examples shown there use a context manager for the calls, so the connection gets closed automatically, but I still have questions about scripts that don't use the context-manager approach.
Kubernetes's API is HTTP-based, so you can often get away without explicitly closing a connection. If you have a short script, things should get cleaned up automatically at the end of the script and it's okay to not explicitly close things.
The specific documentation page you link to shows a safe way to do it:
with kubernetes.client.ApiClient(configuration) as api_client:
    api_instance = kubernetes.client.AppsV1Api(api_client)
    api_instance.create_namespaced_controller_revision(...)
The per-API-version client object is stateless if you pass in an ApiClient to its constructor, so it's safe to create these objects as needed.
The ApiClient class includes an explicit close method, so you could also do this (less safely) without the context-manager syntax:
api_client = kubernetes.client.ApiClient(configuration)
apps_client = kubernetes.client.AppsV1Api(api_client)
...
api_client.close()
The library client front-page README suggests a path that doesn't explicitly create an ApiClient. Looking at the code of one of the generated models, if you don't pass an ApiClient in explicitly, a new one (including its own connection pool) will be created for each API-version client object. That can leak local memory and cause extra connections to the cluster, though this might not matter for small scripts.
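To make that concrete, a small sketch of the difference, assuming a kubeconfig is available to load:
import kubernetes.client
import kubernetes.config

kubernetes.config.load_kube_config()  # or load_incluster_config() inside a pod

# Each of these implicitly creates its own ApiClient (and connection pool):
apps = kubernetes.client.AppsV1Api()
core = kubernetes.client.CoreV1Api()

# Sharing one ApiClient avoids the extra pools and closes cleanly:
with kubernetes.client.ApiClient() as api_client:
    apps = kubernetes.client.AppsV1Api(api_client)
    core = kubernetes.client.CoreV1Api(api_client)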

Accessing Sockets with Python SocketServer.ThreadingTCPServer

I'm using a SocketServer.ThreadingTCPServer to serve socket connections to clients. This provides an interface where users can connect, type commands and get responses. That part I have working well.
However, in some cases I need a separate thread to broadcast a message to all connected clients. I can't figure out how to do this because there is no way to pass arguments to the class instantiated by ThreadingTCPServer. I don't know how to gather a list of socket connections that have been created.
Consider the example here. How could I access the socket created in the MyTCPHandler class from the __main__ thread?
You should not write to the same TCP socket from multiple threads. The writes may be interleaved if you do ("Hello" and "World" may become "HelWloorld").
That being said, you can create a global list to contain references to all the handler objects (which would register themselves in __init__()). The question is what to do with this list. One idea would be to use a queue or pipe to send the broadcast data to each handler object, and have the handler objects check that queue for "extra" broadcast data to send each time their handle() method is invoked (see the sketch below).
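A rough sketch of that queue idea; the handlers list and the drain point are illustrative, and a real server would also need locking and a way to wake handlers that are idle in recv():
import queue
import socketserver

class BroadcastHandler(socketserver.BaseRequestHandler):
    def setup(self):
        self.outbox = queue.Queue()
        self.server.handlers.append(self)  # register with the shared list

    def handle(self):
        while True:
            data = self.request.recv(1024)
            if not data:
                break
            # ... process the client's command here ...
            try:
                while True:  # drain any pending broadcast data
                    self.request.sendall(self.outbox.get_nowait())
            except queue.Empty:
                pass

    def finish(self):
        self.server.handlers.remove(self)

server = socketserver.ThreadingTCPServer(("localhost", 9999), BroadcastHandler)
server.handlers = []

def broadcast(message: bytes):
    # Called from another thread; each handler sends it on its next pass.
    for handler in list(server.handlers):
        handler.outbox.put(message)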
Alternatively, you could use the Twisted networking library, which is more flexible and will let you avoid threading altogether - usually a superior alternative.
Here is what I've come up with. It isn't thread safe yet, but that shouldn't be a hard fix:
When the socket is accepted:
if not hasattr(self.server, 'socketlist'):
    self.server.socketlist = dict()
thread_id = threading.current_thread().ident
self.server.socketlist[thread_id] = self.request
When the socket closes:
del self.server.socketlist[thread_id]
When I want to write to all sockets:
def broadcast(self, message):
    if hasattr(self._server, 'socketlist'):
        for socket in self._server.socketlist.values():
            socket.sendall(message + "\r\n")
It seems to be working well and isn't as messy as I thought it might end up being.
