I have been looking for the Python equivalent of C#'s Socket.Connected property: https://learn.microsoft.com/en-us/dotnet/api/system.net.sockets.socket.connected?view=net-5.0
What I basically need is to check if the socket connection is still open on another end (primary server). If not, move the connection to a new socket(backup server).
I have been quite blank as to how I can move the existing connection to another server without having to reconnect again. Any kind of guidance and help will be really appreciated.
Right now, my connection to the server is made in login(). If the primary disconnects, I want the existing user to be moved to the secondary server so that file and word operations keep working. What changes should I make to my code to achieve this?
My current code structure is:

Client side:

    def file_operation():
        do_something()

    def word_addition():
        do_something()

    def login():
        s.connect((host, port))  # connect() takes a single (host, port) tuple

    if __name__ == "__main__":
        ...

Server side:

    def accept_client():
        conn, addr = s.accept()  # accept() is a method call and returns (conn, addr)
        do_something()

    def GUI():
        accept_client()

    if __name__ == "__main__":
        ...
According to the Python Sockets documentation, no, there isn't. Python sockets are merely a wrapper around the Berkeley Sockets API, so whatever is in that API, that's what's in the Python API.
If you look at the source code for Microsoft's Socket.cs class, you'll see that they maintain a boolean flag field m_IsConnected to indicate the socket's current connection status. You could potentially do the same with your own custom sockets class, using Microsoft's code as a model for writing your Python code.
Or, you could simply use the socket as you normally would, and switch to a different server when a socket error occurs.
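That error-driven switch can be sketched with the standard library alone (the endpoint list is a placeholder; list the primary server first and the backup second):

```python
import socket

def connect_with_failover(endpoints, timeout=5.0):
    """Try each (host, port) in order and return the first socket that connects."""
    last_error = None
    for host, port in endpoints:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as exc:  # refused, timed out, unreachable, ...
            last_error = exc
    raise ConnectionError("all servers unreachable") from last_error

# At login, pass [(primary_host, port), (backup_host, port)].  If a later
# send()/recv() raises OSError, call connect_with_failover() again to move
# the session over to whichever server is still reachable.
```

Note there is no reliable way to ask an idle TCP socket "are you still connected?"; the disconnect only surfaces as an error on the next read or write, which is why reacting to the exception is the idiomatic approach.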
I want to write a test for my code which uses an FTP library and does upload data via FTP.
I would like to avoid the need for a real FTP server in my test.
What is the most simple way to test my code?
There are several edge-cases which I would like to test.
For example, my code tries to create a directory which already exists.
I want to catch the exception and do appropriate error handling.
I know that I could use the mocking library. I used it before. But maybe there is a better solution for this use case?
Update Why I don't want to do mocking: I know that I could use mocking to solve this. I could mock the library I use (I use ftputil from Stefan Schwarzer) and test my code this way. But what happens if I change my code and use a different FTP library in the future? Then I would need to re-write my testing code, too. I am lazy. I want to be able to rewrite the real code I am testing without touching the test code. But maybe I am still missing a cool way to use mocking.
Solved with https://github.com/tbz-pariv/ftpservercontext
First, to get this out of the way: you aren't asking about mocking, your question is about faking.
Fake: an implementation of an interface that exhibits correct behaviour, but cannot be used in production.
Mock: an implementation of an interface that responds to interactions based on a scripted response (script as in movie script, not uncompiled code).
Stub: an implementation of an interface lacking any real implementation, usually used as a placeholder in tests.
Notice that in every case the word "interface" is used.
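To make the distinction concrete, here is a small sketch (all names invented for illustration) of a fake versus a mock for a tiny directory-creating interface, using only the standard library:

```python
from unittest import mock

class FakeFS:
    """A fake: a real, working in-memory implementation of mkdir."""
    def __init__(self):
        self.dirs = set()

    def mkdir(self, path):
        if path in self.dirs:
            raise FileExistsError(path)
        self.dirs.add(path)

def ensure_dir(fs, path):
    """Code under test: creating an already-existing directory is not an error."""
    try:
        fs.mkdir(path)
    except FileExistsError:
        pass

# With the fake, real behaviour drives the test:
fs = FakeFS()
ensure_dir(fs, "a")
ensure_dir(fs, "a")            # second call must not raise

# With a mock, a scripted response drives the test:
m = mock.Mock()
m.mkdir.side_effect = FileExistsError("a")
ensure_dir(m, "a")             # the scripted exception is handled
m.mkdir.assert_called_once_with("a")
```

The fake exercises the error-handling path through genuine state, while the mock only replays what the script says; that difference is why the question below is really about building a fake.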
Your question asks how to FAKE a TCP port such that the behaviour is an FTP server, with the STATE of a read-write filesystem underneath.
This is hard.
It is much easier to MOCK an internal interface that raises when you call the mkdir function.
If you must FAKE an FTP server, I suggest creating a Docker container with the server in the state you want, and using Docker to handle the repeatability and lifecycle of the FTP server.
ContextManager:

    import subprocess
    import time

    class FTPServerContext(object):
        banner = 'FTPServerContext ready'

        def __init__(self, directory_to_serve):
            self.directory_to_serve = directory_to_serve

        def __enter__(self):
            cmd = ['serve_directory_via_ftp']
            self.pipe = subprocess.Popen(cmd, cwd=self.directory_to_serve)
            time.sleep(2)  # TODO check banner instead: https://stackoverflow.com/a/4896288/633961
            return self

        def __exit__(self, *args):
            self.pipe.kill()
console_script:

    from pyftpdlib.authorizers import DummyAuthorizer
    from pyftpdlib.handlers import FTPHandler
    from pyftpdlib.servers import FTPServer

    def serve_directory_via_ftp():
        # https://pyftpdlib.readthedocs.io/en/latest/tutorial.html
        authorizer = DummyAuthorizer()
        authorizer.add_user('testuser-ftp', 'testuser-ftp-pwd', '.', perm='elradfmwMT')
        handler = FTPHandler
        handler.authorizer = authorizer
        handler.banner = testutils.FTPServerContext.banner
        address = ('localhost', 2121)
        server = FTPServer(address, handler)
        server.serve_forever()
Usage in test:

    def test_execute_job_and_create_log(self):
        temp_dir = tempfile.mkdtemp()
        with testutils.FTPServerContext(temp_dir) as ftp_context:
            execute_job_and_create_log(...)
Code is in the public domain; use it under any license you want. It would be great if you made this a pip-installable package on pypi.org.
I have a Pyro4 distributed system with multiple clients connecting to a single server. These clients connect to a remote object, and that object may allocate some resources in the system (virtual devices, in my case).
Once a client disconnects (let's say because of a crash), I need to release those resources. What is the proper way to detect that a specific client has disconnected from a specific object?
I've tried different things:
Overriding the Daemon.clientDisconnected method. I get a connection parameter from this method. But I can't correlate that to an object, because I have no access to which remote object that connection refers to.
Using Pyro4.current_context in Daemon.clientDisconnected. This doesn't work because that is a thread-local object. Because of that, if I have more clients connected than threads in my pool, I get repeated contexts.
Using Proxy._pyroAnnotations as in the "usersession" example from the Pyro4 project doesn't help me either, because again I get the annotation from the Pyro4.core.current_context.annotations attribute, which shows me the wrong annotations when Daemon.clientDisconnected is called (I imagine due to thread-related issues).
Using instance_mode="session" and the __del__ method in the remote class (as each client would have a separate instance of the class, so the instance is supposed to be destroyed once the client disconnects). But this relies on the __del__ method, which has some problems as some Python programmers would point out.
I added my current solution as an answer, but I really would like to know if there's a more elegant way of doing this with Pyro4, as this scenario is a recurrent pattern in network programming.
Pyro 4.63 will probably have some built-in support for this to make it easier to do. You can read about it here: http://pyro4.readthedocs.io/en/latest/tipstricks.html#automatically-freeing-resources-when-client-connection-gets-closed and try it out if you clone the current master from GitHub. Maybe you can take a look and see if that would make your use case simpler?
I use the Proxy._pyroHandshake attribute as a client ID on the client side and override Daemon.validateHandshake and Daemon.clientDisconnected. This way, on every new connection I map the handshake data (unique per client) to a connection. But I really wanted to know if there's a more elegant way to do that in Pyro4, since this is a pattern that comes up very often in network programming.
Notice that instead of using the Proxy as an attribute of Client, Client can also extend Pyro4.Proxy and use _pyroAnnotations to send the client ID with all the remote calls.
    import uuid

    import Pyro4

    class Client:
        def __init__(self):
            self._client_id = uuid.uuid4()
            self._proxy = Pyro4.Proxy("PYRO:server@127.0.0.1")
            self._proxy._pyroHandshake = self._client_id
            self._proxy._pyroBind()

        def allocate_resource(self, resource_name):
            self._proxy.allocate_resource(self._client_id, resource_name)

    class Server:
        def __init__(self):
            self._client_id_by_connection = {}
            self._resources_by_client_id = {}

        def client_connected(self, connection, client_id):
            self._client_id_by_connection[connection] = client_id
            self._resources_by_client_id[client_id] = []

        def client_disconnected(self, connection):
            client_id = self._client_id_by_connection[connection]
            for resource in self._resources_by_client_id[client_id]:
                resource.free()

        @Pyro4.expose
        def allocate_resource(self, client_id, resource_name):
            # Resource is the application's own class
            new_resource = Resource(resource_name)
            self._resources_by_client_id[client_id].append(new_resource)

    daemon = Pyro4.Daemon()  # host/port arguments omitted here
    server = Server()
    daemon.register(server, objectId="server")
    daemon.clientDisconnect = server.client_disconnected
    daemon.validateHandshake = server.client_connected
    daemon.requestLoop()
I wish to use RPyC to provide an API for a hardware board as a service.
The board can only cater for a single user at a time.
Is there any way I can get RPyC to enforce that only a single user can get access at a time?
I'm not sure if this would work (or work well), but you can try starting a OneShotServer inside a loop, thus at any given moment only one connection is served. When the connection is closed, the server terminates, and you start another one for the next client.
Something like:
    from rpyc.utils.server import OneShotServer

    is_aborting = False
    while not is_aborting:
        server = OneShotServer(myservice, *args, **kwargs)
        # serve the next client:
        server.start()
        # done serving this client; loop around and accept the next one
If this doesn't work, your best bet is to subclass ThreadedServer, and override the _accept_method method to keep track if there's already a connection open, and return an error if there is.
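The bookkeeping for that ThreadedServer approach can be sketched with a plain lock (the class name is invented; how exactly the override hooks into _accept_method depends on your RPyC version, so that part is left as comments):

```python
import threading

class SingleClientGate:
    """Tracks whether a client already holds the single available slot."""

    def __init__(self):
        self._lock = threading.Lock()

    def try_acquire(self):
        # Non-blocking: True if the caller got the slot, False if it is taken.
        return self._lock.acquire(blocking=False)

    def release(self):
        self._lock.release()

# Inside a ThreadedServer subclass, the overridden _accept_method would call
# gate.try_acquire() on each new connection, close the connection immediately
# when it returns False, and call gate.release() when the active client leaves.
```

Using a non-blocking acquire (rather than a boolean flag) makes the check-and-claim step atomic, so two clients connecting at the same instant cannot both be admitted.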
I am looking for a RPC library in Java or Python (Python is preferred) that uses TCP. It should support:
Asynchronous
Bidirectional
RPC
Some sort of event loop (with callbacks or similar)
Any recommendations? I have looked at things like bjsonrpc, which seemed to be the right sort of thing; however, the server didn't seem able to identify individual connections. So if one user has identified himself/herself and a request comes in from another user to send that user a message, there is no way to get at the first user's connection in order to deliver it.
You should definitely check out Twisted. It's an event-based Python networking framework that has an implementation of an event loop (called the "reactor") supporting select, poll, epoll, kqueue and I/O completion ports, and it mediates asynchronous calls with objects called Deferreds.
As for your RPC requirement, perhaps you should check out Twisted's PB library or AMP.
I'm not entirely sure what you meant by "event loop", but you should check out RPyC (Python).
RPyC Project page
I'm the author of bjsonrpc. I'm sure it's possible to do what you want with it.
Some things maybe are poorly documented or maybe some examples are needed.
But, in short, handlers can store internal state (such as whether the user is authenticated, or the username). From any handler you can access the Connection class, which holds the socket itself.
Seems you want something like a chat as an example. I did something similar in the past. I'll try to add a chat example for a new release.
Internal states are explained here:
http://packages.python.org/bjsonrpc/tutorial1/index.html#stateful-server
They should be used for authentication (but no standard auth method is provided yet).
On how to reach the connection class from the handler, that isn't documented yet (sorry), but it is used sometimes in the examples inside the source code. For example, example1-server.py contains this public function:
    def gettotal(self):
        self._conn.notify.notify("total")
        return self.value_total
BaseHandler._conn represents the connection for that user. And is exactly the same class you get when you connect:
    conn = bjsonrpc.connect(host=host, port=port, handler_factory=MyHandler)
So, you can store the connections for logged-in users in a global variable, and later call any client method you want.
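That global-registry idea can be sketched independently of bjsonrpc (all names here are invented for illustration; with bjsonrpc the stored object would be the handler's _conn):

```python
# Hypothetical registry mapping logged-in usernames to their connections.
connections_by_user = {}

def register(username, conn):
    connections_by_user[username] = conn

def unregister(username):
    connections_by_user.pop(username, None)

def send_to(username, message):
    """Deliver a message to a named user; returns False if they are offline."""
    conn = connections_by_user.get(username)
    if conn is None:
        return False
    conn.send(message)  # with bjsonrpc this would be a notify/method call
    return True
```

The handler registers the connection once the user authenticates and unregisters it on disconnect; any other handler can then route a message by username.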
I am involved in developing Versile Python (VPy) which provides the capabilities you are requesting. It is currently available as development releases intended primarily for testing, however you may want to check it out.
Regarding identifying users you can configure remote methods to receive a context object which enables the method to receive information about an authenticated user, using a syntax similar to this draft code.
    from versile.quick import *

    @doc
    class MessageBox(VExternal):
        """Dispatches IM messages."""

        @publish(show=True, doc=True, ctx=True)
        def send_message(self, msg, ctx=None):
            """Sends a message to the message box."""
            if ctx.identity is None:
                raise VException('No authenticated user')
            else:
                # do something ...
                pass
So I'm implementing a log server with Twisted (python-loggingserver) and I added simple authentication to the server. If the authentication fails, I want to close the connection to the client. The class in the log server code already has a function called handle_quit(). Is that the right way to close the connection? Here's a code snippet:
    if password != log_password:
        self._logger.warning("Authentication failed. Connection closed.")
        self.handle_quit()
If the handle_quit message you're referring to is this one, then that should work fine. The only thing the method does is self.transport.loseConnection(), which closes the connection. You could also just do self.transport.loseConnection() yourself, which will accomplish the same thing (since it is, of course, the same thing). I would select between these two options by thinking about whether failed authentication should just close the connection or if it should always be treated the same way a quit command is treated. In the current code this makes no difference, but you might imagine the quit command having extra processing at some future point (cleaning up some resources or something).