I am looking for an RPC library in Java or Python (Python is preferred) that uses TCP. It should support:
Asynchronous
Bidirectional
RPC
Some sort of event loop (with callbacks or similar)
Any recommendations? I have looked at things like bjsonrpc, which seemed to be the right sort of thing; however, it didn't seem possible for the server to identify which connection belongs to which user. So if a user has identified himself/herself and a request comes in from another user to send a message to that user, it doesn't expose that user's connection so we can deliver the message.
You should definitely check out Twisted. It's an event-based Python networking framework that has an implementation of an event loop (called the "reactor") supporting select, poll, epoll, kqueue, and I/O completion ports, and it mediates asynchronous calls with objects called Deferreds.
As for your RPC requirement, perhaps you should check out Twisted's PB library or AMP.
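To give a feel for the style, here is a minimal Perspective Broker sketch (the echo method, port number, and host are just placeholders, not anything specific to your protocol) showing the reactor-plus-Deferred flow:

from twisted.internet import reactor
from twisted.spread import pb

class Echoer(pb.Root):
    def remote_echo(self, text):
        # PB exposes remote_* methods to connected clients.
        return text

def run_server():
    reactor.listenTCP(8789, pb.PBServerFactory(Echoer()))
    reactor.run()

def run_client():
    factory = pb.PBClientFactory()
    reactor.connectTCP('localhost', 8789, factory)
    d = factory.getRootObject()
    d.addCallback(lambda root: root.callRemote('echo', 'hello'))
    d.addCallback(lambda result: print('server said:', result))
    d.addBoth(lambda _: reactor.stop())
    reactor.run()

PB also lets the client hand pb.Referenceable objects to the server so the server can call back into the client, which covers the bidirectional part of your requirements.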
I'm not entirely sure what you meant by "event loop", but you should check out RPyC (Python)
RPyC Project page
I'm the author of bjsonrpc. I'm sure it's possible to do what you want with it.
Some things may be poorly documented, or maybe more examples are needed.
But in short, handlers can store internal state (like whether the user is authenticated, or maybe the username). From any handler you can access the "Connection" class, which holds the socket itself.
It seems you want something like a chat as an example. I did something similar in the past; I'll try to add a chat example in a new release.
Internal states are explained here:
http://packages.python.org/bjsonrpc/tutorial1/index.html#stateful-server
They should be used for authentication (but no standard auth method is provided yet).
How to reach the connection class from the handler isn't documented yet (sorry), but it is used in some of the examples inside the source code. For example, example1-server.py contains this public function:
def gettotal(self):
    self._conn.notify.notify("total")
    return self.value_total
BaseHandler._conn represents the connection for that user, and it is exactly the same class you get when you connect:
conn = bjsonrpc.connect(host=host,port=port,handler_factory=MyHandler)
So you can store the connections for logged-in users in a global variable and later call any client method you want.
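A rough sketch of that idea (my own interpretation, not official bjsonrpc documentation; message_received is a hypothetical method on the client's handler):

import bjsonrpc
from bjsonrpc.handlers import BaseHandler

connections = {}  # username -> Connection, shared by all handler instances

class ChatHandler(BaseHandler):
    def login(self, username):
        # Remember this user's connection so other handlers can reach it.
        self.username = username
        connections[username] = self._conn
        return True

    def send_to(self, target, text):
        conn = connections.get(target)
        if conn is None:
            return False
        # Fire-and-forget notification to the target user's client-side handler.
        conn.notify.message_received(self.username, text)
        return True

server = bjsonrpc.createserver(handler_factory=ChatHandler)
server.serve()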
I am involved in developing Versile Python (VPy), which provides the capabilities you are requesting. It is currently available as development releases intended primarily for testing; however, you may want to check it out.
Regarding identifying users, you can configure remote methods to receive a context object that gives the method access to information about an authenticated user, using syntax similar to this draft code:
from versile.quick import *

@doc
class MessageBox(VExternal):
    """Dispatches IM messages."""

    @publish(show=True, doc=True, ctx=True)
    def send_message(self, msg, ctx=None):
        """Sends a message to the message box"""
        if ctx.identity is None:
            raise VException('No authenticated user')
        else:
            # do something ...
            pass
Related
I am using the kubernetes-client library in Python and, looking at the various examples, it appears we don't need to explicitly close the client connection to the API server. Does the client connection get terminated automatically, or are the examples missing the call to close the connection? I also found the docs page for the APIs (AppsV1, for example), and the examples shown there use a context manager for the calls, so the connection gets closed automatically there, but I still have questions about the scripts that don't use the context-manager approach.
Kubernetes's API is HTTP-based, so you can often get away without explicitly closing a connection. If you have a short script, things should get cleaned up automatically at the end of the script and it's okay to not explicitly close things.
The specific documentation page you link to shows a safe way to do it:
with kubernetes.client.ApiClient(configuration) as api_client:
    api_instance = kubernetes.client.AppsV1Api(api_client)
    api_instance.create_namespaced_controller_revision(...)
The per-API-version client object is stateless if you pass in an ApiClient to its constructor, so it's safe to create these objects as needed.
The ApiClient class includes an explicit close method, so you could also do this (less safely) without the context-manager syntax:
api_client = kubernetes.client.ApiClient(configuration)
apps_client = kubernetes.client.AppsV1Api(api_client)
...
api_client.close()
The library client front-page README suggests a path that doesn't explicitly create an ApiClient. Looking at the code of one of the generated models, if you don't pass an ApiClient explicitly, a new one will be created for each API-version client object; that includes its own connection pool as well. That can leak local memory and cause extra connections to the cluster, but this might not matter to you for small scripts.
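For reference, the README-style pattern looks roughly like this (load_kube_config is one common way to populate the default configuration); each API object below quietly builds its own implicit ApiClient and connection pool:

from kubernetes import client, config

# README-style usage: no ApiClient is created explicitly, so each API object
# gets its own ApiClient (and urllib3 connection pool) behind the scenes.
config.load_kube_config()
apps_v1 = client.AppsV1Api()
core_v1 = client.CoreV1Api()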
I'm trying to build a Pyramid app that uses ZeroMQ to provide a very simple chat/messaging interface, but I can't seem to figure out the proper setup/workflow.
The structure seems straightforward enough to me, and in its simplest form could be described in as little as two Pyramid "views"/routes:
The client SSE "show messages" view: This view/route would remain open to the client (using Server-Sent Events on the client-side and Pyramid's response.app_iter on the server-side), listening for messages from ZeroMQ and yielding them up to the client as they are received.
The "submit a new message" view: This view/route would receive POST requests containing a single message's data, which it would then pass to ZeroMQ to be received in the SSE view and displayed to any clients listening there.
For some reason however, I have not been able to come up with the correct recipe for accomplishing this feat. Google seems to be pretty sparse when it comes to recipes for 0MQ and Pyramid, and all of my own hacking has either resulted in Python/Pyramid thread/process problems, or 0MQ never being able to send or receive any messages (which is probably related to my threading issues).
So, how does one properly build this kind of an app with Pyramid?
P.S. You may assume any version of Python/Pyramid, etc in your answers. The point is to just get something that works as described.
I made a proof of concept of exactly that a few years back.
https://github.com/antoineleclair/zmq-sse-chat
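That repo is the best reference. Purely as an illustration of the two-view shape (and not code taken from the repo), a sketch using pyzmq with a PUB/SUB pair might look like this; the route names and socket address are made up, and the streaming view only works under a server that can dedicate a worker thread or greenlet to each open SSE connection:

import zmq
from pyramid.response import Response
from pyramid.view import view_config

SOCKET_ADDR = 'tcp://127.0.0.1:5555'  # made-up address
ctx = zmq.Context.instance()
pub = ctx.socket(zmq.PUB)
pub.bind(SOCKET_ADDR)

@view_config(route_name='send', request_method='POST')
def send_message(request):
    # Publish the posted message to any listening SSE views.
    pub.send_string(request.POST['message'])
    return Response('ok')

@view_config(route_name='stream')
def stream_messages(request):
    sub = ctx.socket(zmq.SUB)
    sub.connect(SOCKET_ADDR)
    sub.setsockopt_string(zmq.SUBSCRIBE, '')

    def event_stream():
        # Each received message becomes one SSE "data:" frame.
        while True:
            msg = sub.recv_string()
            yield ('data: %s\n\n' % msg).encode('utf-8')

    return Response(content_type='text/event-stream', app_iter=event_stream())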
I am currently using mysqldb for my database, and I need to integrate a real-time messaging feature. The chat demo that Tornado provides does not use a database (whereas the blog demo does).
This messaging service will also double as email in the future (like how Facebook's messaging service works: the chat platform is also email). Regardless, I would like to make sure that my current, first chat version can be expanded to function as email, and overall, I need to store messages in a database.
Is something like this as simple as: for every chat message sent, query the database and display the message on the users' screens? Or is this method prone to high server load and poor optimization? How exactly should I structure the "infrastructure" to make this work?
(I apologize for some of the inherent subjectivity in this question; however, I prefer to "measure twice, code once.")
Input, examples, and resources appreciated.
Regards.
Tornado is a single-threaded, non-blocking server.
What this means is that if you make any blocking calls on the main thread you will eventually kill performance. You might not notice this at first because each database call might only block for 20ms. But once you are making more than 200 database calls per second, your application will effectively be locked up.
However, that's quite a few DB calls. In your case that would be 200 people hitting send on their chat message in the same second.
What you probably want to do is use a queue with a non-blocking API. Tornado receives a chat message; you put it on the queue to be saved to the database by another process, and then you send the chat message back out to the other chat members.
When someone connects to a chat session you also need to send off a request to the queue for all the previous messages; when the queue responds, you send those to the newly connected user.
That's how I would approach the problem anyway.
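As a concrete illustration of the hand-off, here is a minimal sketch using an in-process tornado.queues.Queue and a background writer coroutine; save_message_to_db and broadcast are hypothetical helpers you would supply, and a separate process plus a message broker works the same way conceptually:

import tornado.ioloop
import tornado.web
from tornado.queues import Queue

pending_writes = Queue()

async def db_writer():
    # Drains the queue and persists messages without blocking request handlers.
    async for message in pending_writes:
        try:
            save_message_to_db(message)  # your blocking MySQL call; ideally run in an executor
        finally:
            pending_writes.task_done()

class SendHandler(tornado.web.RequestHandler):
    async def post(self):
        message = self.get_argument('message')
        await pending_writes.put(message)  # hand off the DB write
        broadcast(message)                 # push to connected chat members right away
        self.write({'ok': True})

if __name__ == '__main__':
    app = tornado.web.Application([(r'/send', SendHandler)])
    app.listen(8888)
    tornado.ioloop.IOLoop.current().spawn_callback(db_writer)
    tornado.ioloop.IOLoop.current().start()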
Also see this question and answer: Any suggestion for using non-blocking MySQL api on Tornado in Python3?
Just remember, Tornado is single threaded. It's amazing. And can handle thousands of simultaneous connections. But if code in one of those connections blocks for 1 second then NOTHING else will be done for any other connection during that second.
I'm new to Python, so please excuse me in advance if the question doesn't make sense.
We have a Python messaging server which has one file, server.py, with a main function in it. It also has a class "server", and main defines a global instance of this class, "the_server". All other functions in the same file or in different modules (in the same directory) import this instance as "from main import the_server".
Now, my job is to devise a mechanism which allows us to get latest message status (number of messages etc.) from the aforementioned messaging server.
This is the directory structure:
src/ -> all .py files; only one file has main
In the same directory I created another status server with main function listening for connections on a different port and I'm hoping that every time a client asks me for message status I can invoke function(s) on my messaging server which returns the expected numbers.
How can I import the global instance, "the_server" in my status server or rather is it the right way to go?
You should probably use a single server and design a protocol that supports several kinds of messages: 'send' messages get sent, 'recv' messages read any existing messages, 'status' messages get the server status, 'stop' messages shut it down, etc.
You might look at existing protocols such as REST, for ideas.
Unless your "status server" and "real server" are running in the same process (that is, loosely, one of them imports the other and starts it), just from main import the_server in your status server isn't going to help. That will just give you a new, completely independent instance of the_server that isn't doing anything, which you can then report status on.
There are a few obvious ways to solve the problem.
Merge the status server into the real server completely, by expanding the existing protocol to handle status-related requests, as Peter Wooster suggests.
Merge the status server into the real server async I/O implementation, but still listening on two different ports, with different protocol handlers for each.
Merge the status server into the real server process, but with a separate async I/O implementation.
Store the status information in, e.g., a mmap or a multiprocessing.Array instead of directly in the Server object, so the status server can open the same mmap/etc. and read from it. (You might be able to put the Server object itself in shared memory, but I wouldn't recommend this even if you could make it work.)
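To make option 4 slightly more concrete, here is a rough sketch using an mmap-backed status file shared by the two processes; the file path and the two-counter layout are made up for the example, and this assumes a Unix-like OS:

import mmap
import os
import struct

STATUS_FILE = '/tmp/server_status.bin'  # made-up path
RECORD = struct.Struct('<QQ')           # e.g. (messages_received, messages_sent)

def open_status_map():
    # Both processes open the same file and map the same bytes.
    fd = os.open(STATUS_FILE, os.O_RDWR | os.O_CREAT, 0o644)
    os.ftruncate(fd, RECORD.size)
    return mmap.mmap(fd, RECORD.size)

# In the messaging server: update the counters whenever a message is handled.
def write_status(status_map, received, sent):
    status_map[:RECORD.size] = RECORD.pack(received, sent)

# In the status server: read the counters on each client request.
def read_status(status_map):
    return RECORD.unpack(status_map[:RECORD.size])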
I could make these more concrete if you explained how you're dealing with async I/O in the server today. Select (or poll/kqueue/epoll) loop? Thread per connection? Magical greenlets? Non-magical cooperative threading (like PEP 3156/tulip)? Even just "All I know is that we're using twisted/tornado/gevent/etc., so whatever that does" is enough.
I have a project based on Twisted that is used to communicate with network devices, and I am adding support for a new vendor (Citrix NetScaler) whose API is SOAP. Unfortunately the support for SOAP in Twisted still relies on SOAPpy, which is badly out of date. In fact, as of this question (I just checked), twisted.web.soap itself hasn't even been updated in 21 months!
I would like to ask if anyone has any experience they would be willing to share with utilizing Twisted's superb asynchronous transport functionality with SUDS. It seems like plugging in a custom Twisted transport would be a natural fit in SUDS' Client.options.transport; I'm just having a hard time wrapping my head around it.
I did come up with a way to call the SOAP method with SUDS asynchronously by utilizing twisted.internet.threads.deferToThread(), but this feels like a hack to me.
Here is an example of what I've done, to give you an idea:
# netscaler is a module I wrote using suds to interface with NetScaler SOAP
# Source: http://bitbucket.org/jathanism/netscaler-api/src
import netscaler
import os
import sys
from twisted.internet import reactor, defer, threads
# netscaler.API is the class that sets up the suds.client.Client object
host = 'netscaler.local'
username = password = 'nsroot'
wsdl_url = 'file://' + os.path.join(os.getcwd(), 'NSUserAdmin.wsdl')
api = netscaler.API(host, username=username, password=password, wsdl_url=wsdl_url)
results = []
errors = []
def handleResult(result):
    print '\tgot result: %s' % (result,)
    results.append(result)

def handleError(err):
    sys.stderr.write('\tgot failure: %s' % (err,))
    errors.append(err)
# this converts the api.login() call to a Twisted thread.
# api.login() should return True and is equivalent to:
# api.service.login(username=self.username, password=self.password)
deferred = threads.deferToThread(api.login)
deferred.addCallbacks(handleResult, handleError)
reactor.run()
This works as expected and defers the return of the api.login() call until it is complete, instead of blocking. But as I said, it doesn't feel right.
Thanks in advance for any help, guidance, feedback, criticism, insults, or total solutions.
Update: The only solution I've found is twisted-suds, which is a fork of Suds modified to work with Twisted.
The default interpretation of transport in the context of Twisted is probably an implementation of twisted.internet.interfaces.ITransport. At this layer, you're basically dealing with raw bytes being sent and received over a socket of some sort (UDP, TCP, and SSL being the most commonly used three). This isn't really what a SUDS/Twisted integration library is interested in. Instead, what you want is an HTTP client which SUDS can use to make the necessary requests and which presents all of the response data so that SUDS can determine what the result was. That is to say, SUDS doesn't really care about the raw bytes on the network. What it cares about is the HTTP requests and responses.
If you examine the implementation of twisted.web.soap.Proxy (the client part of the Twisted Web SOAP API), you'll see that it doesn't really do much. It's about 20 lines of code that glues SOAPpy to twisted.web.client.getPage. That is, it's hooking SOAPpy in to Twisted in just the way I described above.
Ideally, SUDS would provide some kind of API along the lines of SOAPpy.buildSOAP and SOAPpy.parseSOAPRPC (perhaps the APIs would be a bit more complicated, or accept a few more parameters - I'm not a SOAP expert, so I don't know if SOAPpy's particular APIs are missing something important - but the basic idea should be the same). Then you could write something like twisted.web.soap.Proxy based on SUDS instead. If twisted.web.client.getPage doesn't offer enough control over the requests or enough information about the responses, you could also use twisted.web.client.Agent instead, which is more recently introduced and offers much more control over the whole request/response process. But again, that's really the same idea as the current getPage-based code, just a more flexible/expressive implementation.
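For reference, the existing SOAPpy-based glue is roughly this shape (simplified and reconstructed from memory, not the verbatim twisted.web.soap source); a SUDS-based Proxy would swap the buildSOAP/parseSOAPRPC calls for whatever serialize/parse hooks SUDS exposes:

import SOAPpy
from twisted.web.client import getPage

class Proxy:
    def __init__(self, url, namespace=None, header=None):
        self.url = url
        self.namespace = namespace
        self.header = header

    def callRemote(self, method, *args, **kwargs):
        # Serialize the call with SOAPpy, POST it with getPage, parse the reply.
        payload = SOAPpy.buildSOAP(args=args, kw=kwargs, method=method,
                                   header=self.header, namespace=self.namespace)
        d = getPage(self.url, postdata=payload, method='POST',
                    headers={'content-type': 'text/xml', 'SOAPAction': method})
        # The real code also unwraps a single return value from the parsed response.
        return d.addCallback(SOAPpy.parseSOAPRPC)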
Having just looked at the API documentation for Client.options.transport, it sounds like a SUDS transport is basically an HTTP client. The problem with this kind of integration is that SUDS wants to send a request and then be able to immediately get the response. Since Twisted is largely based on callbacks, a Twisted-based HTTP client API can't immediately return a response to SUDS. It can only return a Deferred (or equivalent).
This is why things work better if the relationship is inverted. Instead of giving SUDS an HTTP client to play with, give SUDS and an HTTP client to a third piece of code and let it orchestrate the interactions.
It may not be impossible to have things work by creating a Twisted-based SUDS transport (aka HTTP client), though. The fact that Twisted primarily uses Deferred (aka callbacks) to expose events doesn't mean that this is the only way it can work. By using a third-party library such as greenlet, it's possible to provide a coroutine-based API, where a request for an asynchronous operation involves switching execution from one coroutine to another, and events are delivered by switching back to the original coroutine. There is a project called corotwine which can do just this. It may be possible to use this to provide SUDS with the kind of HTTP client API it wants; however, it's not guaranteed. It depends on SUDS not breaking when a context switch is suddenly inserted where previously there was none. This is a very subtle and fragile property of SUDS and can easily be changed (unintentionally, even) by the SUDS developers in a future release, so it's probably not the ideal solution, even if you can get it to work now (unless you can get cooperation from the SUDS maintainers in the form of a promise to test their code in this kind of configuration to ensure it continues to work).
As an aside, the reason Twisted Web's SOAP support is still based on SOAPpy and hasn't been modified for nearly two years is that no clear replacement for SOAPpy has ever shown up. There have been many contenders (What SOAP client libraries exist for Python, and where is the documentation for them? covers several of them). If things ever settle down, it may make sense to try to update Twisted's built-in SOAP support. Until then, I think it makes more sense to do these integration libraries separately, so they can be updated more easily and so Twisted itself doesn't end up with a big pile of different SOAP integration that no one wants (which would be worse than the current situation, where there's just one SOAP integration module that no one wants).