I started studying Twisted network programming and came across the following code:
def handle_REGISTER(self, name):
    if name in self.factory.users:
        self.sendLine("Name taken, please choose another.")
        return
    self.sendLine("Welcome, %s!" % (name,))
    self.broadcastMessage("%s has joined the channel." % (name,))
    self.name = name
    self.factory.users[name] = self
    self.state = "CHAT"

def handle_CHAT(self, message):
    message = "<%s> %s" % (self.name, message)
    self.broadcastMessage(message)

def broadcastMessage(self, message):
    for name, protocol in self.factory.users.iteritems():
        if protocol != self:
            protocol.sendLine(message)
What are the benefits of self.x[y] = self?
self.factory.users is a shared mapping; every instance of this class can access it. It is a central registry of connection instances, if you will; each connection is made responsible for registering itself.
By storing references to all the per-user instances in self.factory.users you can then send messages to all users, in the broadcastMessage method:
for name, protocol in self.factory.users.iteritems():
    if protocol != self:
        protocol.sendLine(message)
This loops over all registered instances and calls sendLine() on every other connection.
The code uses the self-reference in two ways:
To determine if a name in the chatroom is already taken
To send everyone else a message (i.e. to avoid sending a copy of the message back to the user who wrote it).
To achieve #2, they iterate over all items in the dict self.factory.users. The keys are the names of users in the chatroom; the values are the corresponding protocol (connection) instances.
When protocol != self, then the code has found an instance which doesn't belong to the current user.
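For context, here is a minimal sketch of the surrounding pattern, assuming Twisted's usual LineReceiver/Factory split (ChatProtocol and ChatFactory are my own names, not from the question):

from twisted.internet import protocol
from twisted.protocols.basic import LineReceiver

class ChatProtocol(LineReceiver):
    def connectionMade(self):
        # Factory.buildProtocol sets self.factory, so every connection
        # sees the same shared users dict on the one factory instance.
        self.name = None

    def connectionLost(self, reason):
        # Deregister ourselves from the shared registry on disconnect.
        if self.name and self.name in self.factory.users:
            del self.factory.users[self.name]

class ChatFactory(protocol.Factory):
    protocol = ChatProtocol

    def __init__(self):
        self.users = {}  # shared registry: name -> protocol instance

The handle_REGISTER, handle_CHAT and broadcastMessage methods from the question would live on ChatProtocol.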
I just jumped into websocket programming with basic knowledge of asynchronous programming and threads. I have something like this:
import tornado.httpserver
import tornado.websocket
import tornado.ioloop
import tornado.web
import socket
import uuid
import json
import datetime

class WSHandler(tornado.websocket.WebSocketHandler):
    clients = []

    def open(self):
        self.id = str(uuid.uuid4())
        self.user_info = self.request.remote_ip + ' - ' + self.id
        print(f'[{self.user_info}] Connected')
        client = {"sess": self, "id": self.id}
        self.clients.append(client.copy())

    def on_message(self, message):
        print(f'[{self.user_info}] Message received: {message}')
        print(f'[{self.user_info}] Reply to client: {message[::-1]}')
        self.write_message(message[::-1])
        self.comm(message)

    def on_close(self):
        print(f'[{self.user_info}] Disconnected')
        for x in self.clients:
            if x["id"] == self.id:
                self.clients.remove(x)

    def check_origin(self, origin):
        return True

application = tornado.web.Application([
    (r'/', WSHandler),
])

if __name__ == "__main__":
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(80)
    myIP = socket.gethostbyname(socket.gethostname())
    print('*** Websocket Server Started at %s ***' % myIP)
    tornado.ioloop.IOLoop.instance().start()
My question is: where do I add code? Should I add everything inside the WSHandler class, or outside it, or in another file? And when should I use @classmethod? For now there is no problem when I add code inside the handler, but I only have a few test clients.
Maybe not the full solution, but a few thoughts.
You can look at the tornado websocket chat example, here.
The first good change is that their client collection (waiters) is a set(), which makes sure that every client is contained only once by default. It is also defined and accessed as a class variable, so you don't use self.waiters but cls.waiters or ClassName.waiters (in this case ChatSocketHandler.waiters) to access it:
class ChatSocketHandler(tornado.websocket.WebSocketHandler):
    waiters = set()
The second change is that they update every client (you could choose here to send the update not to all clients but only to some) in a @classmethod, since they don't want to receive the instance (self) but the class (cls) and refer to the class variables (in their case waiters, cache and cache_size). We can forget about the cache and cache size here.
So like this:
@classmethod
def send_updates(cls, chat):
    logging.info("sending message to %d waiters", len(cls.waiters))
    for waiter in cls.waiters:
        try:
            waiter.write_message(chat)
        except:
            logging.error("Error sending message", exc_info=True)
On every request a new instance of your handler is created, referred to as self. Every attribute on self is unique to that instance and related to the actual client calling your methods, which is good for identifying a client on each call. An instance-based client list (one created per instance) would therefore start empty on every connection, and adding a client would only add it to that instance's view of the world.
But sometimes you want some variables, like the list of clients, to be the same for all instances created from your class. This is where class variables (the ones you define directly under the class definition) and the @classmethod decorator come into play.
@classmethod makes the method call independent of any instance, which means you can only access class variables in such methods. But in the case of a message broker this is pretty much what we want:
Add clients to the class variable, which is the same for all instances of your handler; since it is defined as a set, each client is unique.
When receiving messages, send them out to all clients (or a subset of them).
So on_message is a "normal" instance method, but it ends up calling something like send_updates(), which is a @classmethod. send_updates() iterates over all (or a subset) of the clients (waiters) and sends the actual updates.
From the example:
@classmethod
def send_updates(cls, chat):
    logging.info("sending message to %d waiters", len(cls.waiters))
    for waiter in cls.waiters:
        try:
            waiter.write_message(chat)
        except:
            logging.error("Error sending message", exc_info=True)
Remember that clients were added with cls.waiters.add(self), so every waiter is really an instance, and you are "simply" calling each instance's write_message() method (each instance represents a caller). So this is not broadcast at the network level but sent to every caller one by one. This would be the place where you could separate clients by some criteria, like topics or groups ...
So in short: use @classmethod for methods that are independent of a specific instance (like a caller or client in your case) and where you want to act on "all" (or a subset of all) of your clients. You can only access class variables in those methods, which should be fine, since that's their purpose ;)
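Applied to the handler from the question, a minimal sketch of this approach (same Tornado API as above; the broadcast method name is my own):

class WSHandler(tornado.websocket.WebSocketHandler):
    clients = set()  # class variable: one registry shared by all connections

    def open(self):
        self.id = str(uuid.uuid4())
        WSHandler.clients.add(self)  # register this connection

    def on_message(self, message):
        # instance method: delegates the fan-out to the classmethod
        self.broadcast(message)

    def on_close(self):
        WSHandler.clients.discard(self)  # a set makes removal trivial

    @classmethod
    def broadcast(cls, message):
        # independent of any single connection: walk the shared registry
        for client in cls.clients:
            client.write_message(message)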
I am trying to write my own transaction processor. I am writing it for a simple Account class:
class Account:
    def __init__(self, name, ac_number, balance):
        self.name = name
        self.ac_number = ac_number
        self.balance = balance
My TP is working fine for a single account. Now I want to improve it for multiple accounts. To get a different state address for each account I have changed the _get_account_address function. I am following @danintel's cookiejar and xo_python projects, and I am following the xo code to derive the address.
AC_NAMESPACE = hashlib.sha512('account'.encode("utf-8")).hexdigest()[0:6]

def _make_account_address(name):
    return AC_NAMESPACE + \
        hashlib.sha512(name.encode('utf-8')).hexdigest()[:64]
_get_account_address works fine, but _make_account_address produces an error in the CLI:

Tried to set unauthorized address
My state code is:
import logging
import hashlib

from sawtooth_sdk.processor.exceptions import InternalError

LOGGER = logging.getLogger(__name__)

FAMILY_NAME = "account"
# The TF prefix is the first 6 hex characters of SHA-512("account")
AC_NAMESPACE = hashlib.sha512('account'.encode("utf-8")).hexdigest()[0:6]

def _make_account_address(name):
    return AC_NAMESPACE + \
        hashlib.sha512(name.encode('utf-8')).hexdigest()[:64]

def _hash(data):
    '''Compute the SHA-512 hash and return the result as hex characters.'''
    return hashlib.sha512(data).hexdigest()

def _get_account_address(from_key):
    '''
    Return the address of an account object in the account TF.
    The address is the first 6 hex characters of SHA-512(TF name),
    plus the first 64 hex characters of SHA-512(public key).
    '''
    return _hash(FAMILY_NAME.encode('utf-8'))[0:6] + \
        _hash(from_key.encode('utf-8'))[0:64]

class Account:
    def __init__(self, name, ac_number, balance):
        self.name = name
        self.ac_number = ac_number
        self.balance = balance

class AccountState:
    def __init__(self, context):
        self._context = context

    def make_account(self, account_obj, from_key):
        '''Create an account entry in state.'''
        account_address = _make_account_address(account_obj.name)  # not working
        # account_address = _get_account_address(from_key)  # working fine
        LOGGER.info('Got the key %s and the account address %s.',
                    from_key, account_address)
        state_str = ",".join([str(account_obj.name),
                              str(account_obj.ac_number),
                              str(account_obj.balance)])
        state_data = state_str.encode('utf-8')
        addresses = self._context.set_state({account_address: state_data})
        if len(addresses) < 1:
            raise InternalError("State Error")
This probably has been answered already, but I don't have enough reputation to add a comment.
The error you see, "Tried to set unauthorized address:", occurs because the client did not include these addresses in the TransactionHeader's "outputs" field.
It is possible for the client to give a prefix instead of a complete address in the "outputs" field, but use this feature cautiously because it will impact parallel transaction scheduling.
Please refer to https://sawtooth.hyperledger.org/docs/core/nightly/master/architecture/transactions_and_batches.html#dependencies-and-input-output-addresses for a detailed understanding of the different fields when composing a TransactionHeader.
It means the transaction processor tried to set (put) a value at an address not in the list of outputs. This occurs when a client submits a transaction with an inaccurate list of inputs/outputs.
Make sure the Sawtooth address is the correct length: an address is 70 hex characters, representing a 35-byte address (including the 6-hex-character, i.e. 3-byte, transaction family prefix).
Also, you can set the outputs list to empty; that allows all addresses to be written, at the expense of safety and efficiency. It is better to set the inputs and outputs to the state addresses you are changing: that allows transactions to be run in parallel (if you run sawtooth-validator --scheduler parallel -vv) and is safer, as the transaction processor cannot write to state addresses outside the list.
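For reference, a sketch of the client side computing the same address and listing it in both inputs and outputs. The field names come from the Sawtooth TransactionHeader protobuf; the payload, signer key and nonce are placeholders, and the signing step is elided:

import hashlib
from sawtooth_sdk.protobuf.transaction_pb2 import TransactionHeader

FAMILY_NAME = 'account'
AC_NAMESPACE = hashlib.sha512(FAMILY_NAME.encode('utf-8')).hexdigest()[0:6]

def make_account_address(name):
    # Must match the TP's _make_account_address exactly:
    # 6-hex-char family prefix + 64 hex chars = 70 hex chars total.
    return AC_NAMESPACE + hashlib.sha512(name.encode('utf-8')).hexdigest()[:64]

address = make_account_address('alice')
payload = b'create,alice,12345,100'  # placeholder payload
public_key = '02' + '0' * 64         # placeholder signer public key

header = TransactionHeader(
    family_name=FAMILY_NAME,
    family_version='1.0',
    inputs=[address],    # addresses the TP is allowed to read
    outputs=[address],   # addresses the TP is allowed to write; omitting
                         # the address here triggers "Tried to set
                         # unauthorized address"
    signer_public_key=public_key,
    batcher_public_key=public_key,
    dependencies=[],
    payload_sha512=hashlib.sha512(payload).hexdigest(),
    nonce='',
)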
I had this issue as well. I realized that I had different prefixes in my addresses. Make sure they match!
I'm writing a Python module (experimental use case) that gives users the ability to send messages to another computer. The computer we want to send the message to has an unknown address that changes at arbitrary times. There is another computer (an intermediary) that provides the address and the time until that address expires.
For simplicity's sake, let's call the sender computer-A, the receiver computer-B, and the intermediary we need to contact for computer-B's address computer-C.
What I'm trying to accomplish:
I want to defer the waiting for the expiration time to asyncio.sleep(). When the time expires, I would expect the coroutine to get control of the event loop back and run a function that updates the address.
The problem I'm struggling with is how to implement this within a class where I cannot invoke run_until_complete/run_forever (or am I blatantly incorrect?). How do you implement such a thing using the asyncio framework?
Example (hypothetical) code:
import asyncio

from some_random_messaging_service import deliver_msg

INTERMEDIARY_COMPUTER_C_ADDRESS = "some.random.address"

class CustomMessagingSystem:
    def __init__(self, computer=None):
        """Constructor

        :param computer: computer name
        """
        self.addresses = {}
        if computer:
            self.get_address(computer)

    def get_address(self, computer):
        """Gets address from computer-C (intermediary)

        :param computer: computer name
        """
        self.addresses[computer] = self.find_address(INTERMEDIARY_COMPUTER_C_ADDRESS, computer)
        # 'await' inside a plain 'def' is invalid syntax -- this is the crux of the question
        await self.update_address(computer, self.expiration_time(computer))

    def expiration_time(self, computer):
        """
        :param computer: computer name
        :return: address expiration time in seconds
        """
        return self.addresses[computer][1]

    def address(self, computer):
        """
        :param computer: computer name
        :return: computer address
        """
        return self.addresses[computer][0]

    async def update_address(self, computer, expiration_time):
        """Updates address of computer after given expiration_time

        :param computer: computer name
        :param expiration_time: expiration time in seconds
        """
        await asyncio.sleep(expiration_time)
        self.get_address(computer)

    def send_message(self, computer, message):
        """Sends message to target computer

        :param computer: computer name
        :param message: UTF-8 message
        """
        deliver_msg(self.address(computer), message)
A way to accomplish this is to have your class schedule a task on the event loop using create_task.
This can be done before or after the event loop has been started.
As you want to have a separate timer for each address, it would be simplest to have 1 task per address;
we can keep these in a dictionary alongside the addresses:
import asyncio

class CustomMessagingSystem:
    def __init__(self, computer=None, ioloop=None):
        self.addresses = {}
        self.updaters = {}
        # Avoid a default argument here: it would be evaluated once, at
        # definition time, rather than when the instance is created.
        self.ioloop = ioloop if ioloop is not None else asyncio.get_event_loop()
        if computer:
            self._add_address(computer)

    def get_address(self, computer):
        try:
            return self.addresses[computer]
        except KeyError:
            self._add_address(computer)
            return self.addresses[computer]

    def _add_address(self, computer):
        address = self.find_address(INTERMEDIARY_COMPUTER_C_ADDRESS, computer)
        # create_task expects a coroutine object, so call the coroutine function
        task = self.ioloop.create_task(self._update_address(computer))
        self.updaters[computer] = task
        self.addresses[computer] = address

    async def _update_address(self, computer):
        while True:
            addr, expiration_time = self.find_address(INTERMEDIARY_COMPUTER_C_ADDRESS, computer)
            self.addresses[computer] = addr
            await asyncio.sleep(expiration_time)

    def send_message(self, computer, message):
        deliver_msg(self.get_address(computer), message)
Naturally, if the event loop is never started, then the updating will never happen.
Finally, something that you'll want to do is to control the lifetime of these updater tasks.
I didn't implement this in the above example to keep it short.
The standard approach is to make your class into a context manager, and get __exit__
to cancel all the updaters.
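A sketch of that lifetime management, assuming the class from the example above:

class CustomMessagingSystem:
    # ... methods from above ...

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Cancel every updater task so nothing keeps running
        # (or keeps the event loop alive) after the system is closed.
        for task in self.updaters.values():
            task.cancel()
        self.updaters.clear()

Used as "with CustomMessagingSystem() as ms: ...", the updater tasks are then cancelled automatically on exit.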
The general flow of an asyncio program is something like this:
async def long_io_bound_operation():
    ...  # await some async function that does your work

def main():
    asyncio.get_event_loop().run_until_complete(long_io_bound_operation())
There are other ways to wait on the coroutine returned by calling long_io_bound_operation(), depending on what you want, but this is the main form of it. Read up on the asyncio module for the gritty details, but the gist is that every time you use the await keyword, the Python runtime can elect to do a non-blocking wait for the result rather than blocking and spinning while waiting for work to finish.
It's a little unclear to me from your code exactly what protocol you plan to use for this communication, but it's a really good bet that there is already an asyncio-compatible wrapper around that protocol for you to leverage. aiohttp is an async wrapper for HTTP requests, for example.
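For instance, a minimal aiohttp fetch might look like this (the URL is a placeholder):

import asyncio
import aiohttp

async def fetch(url):
    # both the session and the response are async context managers
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.text()

body = asyncio.get_event_loop().run_until_complete(fetch("http://example.com"))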
If you give more details about the protocol you're using, then you'll probably get more specific advice for your problem. Hope this general summary is useful, though.
I want to use SleekXMPP and automatically accept all chat room invites that are sent to me. I know that the xep_0045 plugin can detect when I receive an invite, as I am notified in the debugger. I am still pretty new to Python and any help would be appreciated.
So far, I've found a function called handle_groupchat_invite in the xep_0045 plugin. Specifically, this code:
def plugin_init(self):
    # ...
    self.xmpp.registerHandler(
        Callback('MUCInvite',
                 MatchXMLMask("<message xmlns='%s'>"
                              "<x xmlns='http://jabber.org/protocol/muc#user'>"
                              "<invite></invite></x></message>" % self.xmpp.default_ns),
                 self.handle_groupchat_invite))
    # ...

def handle_groupchat_invite(self, inv):
    """ Handle an invite into a muc.
    """
    logging.debug("MUC invite to %s from %s: %s", inv['from'], inv["from"], inv)
    if inv['from'].bare not in self.rooms.keys():
        self.xmpp.event("groupchat_invite", inv)
So I can see this method at work, as the "MUC invite to..." message shows up in the terminal log. From there, I would expect that I need to use self.plugin['xep_0045'].joinMUC() to join the chat room (whose JID is given by inv["from"]). However, I am not exactly sure where I should call this code in my script.
Thanks again for the help.
Update: I've also tried using add_event_handler in the __init__ function. Specifically my code is:
def __init__(self, jid, password, room, nick):
    sleekxmpp.ClientXMPP.__init__(self, jid, password)

    self.room = room
    self.nick = nick

    # The session_start event will be triggered when
    # the bot establishes its connection with the server
    # and the XML streams are ready for use. We want to
    # listen for this event so that we can initialize
    # our roster.
    self.add_event_handler("session_start", self.start)

    # The groupchat_message event is triggered whenever a message
    # stanza is received from any chat room. If you also
    # register a handler for the 'message' event, MUC messages
    # will be processed by both handlers.
    self.add_event_handler("groupchat_message", self.muc_message)

    # The groupchat_presence event is triggered whenever a
    # presence stanza is received from any chat room, including
    # any presences you send yourself. To limit event handling
    # to a single room, use the events muc::room#server::presence,
    # muc::room#server::got_online, or muc::room#server::got_offline.
    self.add_event_handler("muc::%s::got_online" % self.room,
                           self.muc_online)

    self.add_event_hander("groupchat_invite", self.sent_invite)
From there, I created the sent_invite function, whose code is:
def sent_invite(self, inv):
    self.plugin['xep_0045'].joinMUC(inv["from"], self.nick, wait=True)
However, I get the following error when I do this:
File "muc.py", line 66, in init
self.add_event_hander("groupchat_invite", self.sent_invite) AttributeError: 'MUCBot' object has no attribute 'add_event_hander'
Yet in the xep_0045 plugin I see this code: self.xmpp.event("groupchat_invite", inv). According to the Event Handlers SleekXMPP wiki page,
Stream events arise whenever particular stanzas are received from the XML stream. Triggered events are created whenever xmpp.event(name, data) is called (where xmpp is a SleekXMPP object).
Can someone please explain why I am getting the error? I've also tried using
self.add_event_hander("muc::groupchat_invite", self.sent_invite)
but that didn't work either.
I just downloaded SleekXMPP from git, added a groupchat_invite handler like this, and it works:
diff --git a/examples/muc.py b/examples/muc.py
index 5b5c764..e327fac 100755
--- a/examples/muc.py
+++ b/examples/muc.py
@@ -61,7 +61,10 @@ class MUCBot(sleekxmpp.ClientXMPP):
         # muc::room#server::got_online, or muc::room#server::got_offline.
         self.add_event_handler("muc::%s::got_online" % self.room,
                                self.muc_online)
-
+        self.add_event_handler("groupchat_invite", self.accept_invite)
+
+    def accept_invite(self, inv):
+        print("Invite from %s to %s" % (inv["from"], inv["to"]))
 
     def start(self, event):
         """
I'm writing a chat feature (like the Facebook.com one) for a Google App Engine site. I need a way to keep track of which users have new messages. I'm currently trying to use memcache:
from google.appengine.api import memcache

class Message():
    def __init__(self, from_user_key, message_text):
        self.from_user_key = from_user_key
        self.message_text = message_text

class NewMessages():
    def __init__(self):
        self.messages = []

    def add_message(self, message):
        self.messages.append(message)

    def get_messages(self):
        return self.messages

    def messages_sent(self):
        self.messages = []  # Clear all messages

class ChatUserManager():
    @staticmethod
    def load(user_key):
        manager = memcache.get("chat_user_%s" % user_key)
        if manager is not None:
            return manager
        else:
            manager = ChatUserManager(user_key)
            memcache.set("chat_user_%s" % user_key, manager)
            return manager

    def save(self):
        memcache.set("chat_user_%s" % self.user_key, self)

    def __init__(self, user_key):
        self.online = True
        self.new_messages = NewMessages()
        self.new_data = False
        self.user_key = user_key

    def receive_message(self, message):
        self.new_data = True
        self.new_messages.add_message(message)

    def send_message(self, message):
        to_manager = ChatUserManager.load(message.from_user_key)
        to_manager.receive_message(message)

    def client_receive_success(self):
        self.new_data = False
        self.new_messages.messages_sent()
This chat is user-to-user, like Facebook or an IM session, not group chat.
Each user will poll a URL with Ajax to check for new messages addressed to them every x seconds. The chat manager will be loaded on that page (ChatUserManager.load(user_key)) and checked for new messages. Once they are delivered, the manager will be told the messages have been sent (manager.client_receive_success()) and then saved back to memcache (manager.save()).
When a user sends a message in the JavaScript client, it will send an Ajax request to a URL. That URL handler will load the sender's ChatUserManager and call .send_message(Message(to_user_key, message_string)).
I'm concerned about the practicality of this model. If everything is in memcache, how will it be synchronized across different pages?
Is there a better way to do this?
I admit I'm not a Python pro yet, so the code might not be very Pythonic. Are there any best practices I'm missing?
The problem isn't so much how to share data between "pages" but how the usability of the service will be impacted by using memcache.
There are no guarantees associated with data persistence in memcache: one moment it's there, the next it might not be.
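One way to live with that is to treat memcache purely as a cache in front of the datastore. A sketch, assuming a hypothetical ChatUserRecord model alongside the ChatUserManager from the question:

import pickle

from google.appengine.api import memcache
from google.appengine.ext import db

class ChatUserRecord(db.Model):
    # hypothetical durable copy of a ChatUserManager, keyed by user_key
    pickled_manager = db.BlobProperty()

def load_manager(user_key):
    manager = memcache.get("chat_user_%s" % user_key)  # fast path
    if manager is not None:
        return manager
    record = ChatUserRecord.get_by_key_name(user_key)  # memcache was evicted
    if record is not None:
        manager = pickle.loads(record.pickled_manager)
    else:
        manager = ChatUserManager(user_key)
    memcache.set("chat_user_%s" % user_key, manager)
    return manager

def save_manager(manager):
    # write through: datastore first (durable), then memcache (fast reads)
    ChatUserRecord(key_name=manager.user_key,
                   pickled_manager=pickle.dumps(manager)).put()
    memcache.set("chat_user_%s" % manager.user_key, manager)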