Send an XMPP message to all connected clients/resources - Python

How can I send one XMPP message to all connected clients/resources using a Python library, for example xmpppy, jabber.py, or jabberbot? Any other command-line solution is fine as well.
So far I've only been able to send an echo or a single message to a single client.
The purpose is to send a message to all connected resources/clients, not to a group.
This might be triggered by a command, but that is not strictly necessary.
Thank you.

I cannot give you a specific Python example, but I can explain how the logic works.
When you send a message to a bare JID, it depends on the server software or configuration how it is routed. Some servers send the message to the "most available resource", and some servers send it to all resources. For example, Google Talk sends it to all resources.
If you control the server software and it can route messages addressed to a bare JID to all connected resources, then this is the easiest way.
If your code must work on any server, then you should collect all available resources of your contacts. You get them from presence stanzas; most libraries have a callback for this. Then you can send the messages to the full JIDs (with resources) in a loop, as in the sketch below.
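As a rough illustration, here is an untested sketch using xmpppy (one of the libraries mentioned in the question); the account details are placeholders:

    import xmpp

    jid = xmpp.JID('bot@example.com')          # placeholder account
    conn = xmpp.Client(jid.getDomain(), debug=[])
    conn.connect()
    conn.auth(jid.getNode(), 'secret', resource='bot')

    resources = {}  # bare JID -> set of online resource names

    def on_presence(session, stanza):
        sender = stanza.getFrom()
        bare, res = sender.getStripped(), sender.getResource()
        if stanza.getType() == 'unavailable':
            resources.get(bare, set()).discard(res)
        else:
            resources.setdefault(bare, set()).add(res)

    conn.RegisterHandler('presence', on_presence)
    conn.sendInitPresence()

    def send_to_all_resources(bare_jid, text):
        # One copy of the message per resource we have seen online.
        for res in resources.get(bare_jid, set()):
            conn.send(xmpp.Message('%s/%s' % (bare_jid, res), text))

    # Keep processing stanzas so presence updates keep arriving.
    while True:
        conn.Process(1)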

I think that if you set the same priority for all connected resources it would work, but I have not actually tried it.
However, ejabberd has a module implementing Message Carbons which does this for you; a similar feature is available in Openfire via the property "route.all-resources".
Hint: if Message Carbons are used, the XMPP client library must support them too for this to work.

Related

How to connect the Raspberry Pi to the internet with a SIM7020E using 1NCE IoT FLAT

Hello, I work with a Raspberry Pi 4 and a Pi Zero W, and I want to bring up an internet connection over the cellular network using a SIM7020E. I manage to establish contact by testing the basic AT commands in Minicom, but the connection does not come up. In the end I need to transfer data from my Pi by email, but without Wi-Fi or an Ethernet cable.
I program in Python.
Please, can someone help me?
I would recommend checking out the 1NCE Developer Hub.
In the recipes section there are examples for the SIM7000G, which works in much the same way and can serve as a reference. You can find all the recipes here: https://help.1nce.com/dev-hub/recipes and look for SIM7000G.
The main things you need to do are to configure the SIM7020E via the serial interface and to send the correct commands from your Python application.
Do the overall network registration of the module as shown here: https://help.1nce.com/dev-hub/recipes/sim7000g-network-registration
From there, you can already start sending data. Unfortunately, there is no direct example for sending mail, but since I expect you are using a mail delivery service that also has an API, you can follow this guide: https://help.1nce.com/dev-hub/recipes/sim7000g-http-post
It will show you what commands are required for an HTTP POST call.
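As an illustration of that serial configuration step, here is a minimal untested sketch using pySerial; the port name and baud rate are placeholders and depend on how the SIM7020E is wired up:

    import time

    import serial  # pySerial

    # Placeholder port and baud rate; on a Pi the module is often reachable
    # via /dev/ttyS0 or /dev/ttyUSB0.
    port = serial.Serial('/dev/ttyUSB0', baudrate=115200, timeout=2)

    def send_at(command, wait=1.0):
        """Send a single AT command and return the raw response."""
        port.write((command + '\r\n').encode())
        time.sleep(wait)
        return port.read(port.in_waiting or 1).decode(errors='replace')

    print(send_at('AT'))         # basic liveness check
    print(send_at('AT+CPIN?'))   # SIM status
    print(send_at('AT+CEREG?'))  # network registration (see the recipe above)
    print(send_at('AT+CGATT?'))  # packet service attach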

Django Channels: How to pass incoming messages to external script which is running outside of django?

I have started a private project with Django and Channels to build a web-based UI to control the music player daemon (mpd) on a Raspberry Pi. I know that there are other out-of-the-box projects like Volumio or moode audio that do the same, but my intention is to learn something new!
Up to now I have managed to set up an nginx server on the Pi that communicates with my devices like mobile phone or PC. In the background, nginx talks to a uWSGI server for HTTP requests to Django, and to a Daphne server as the ASGI server for the WebSocket connection to Django Channels. A Redis server is also installed as a backend, because the channel layer needs it. So, on a client request, a simple HTML page is served as the UI, and a WebSocket connection is established.
In parallel I have a separate script as an mpd handler, wrapped in a while loop to keep it alive, which does all the work with mpd using the Python module python-mpd2.
The mpd handler shall receive its commands, like play, stop, etc., via WebSocket from the clients/consumers and react to them. At the same time, while a song is playing, it shall send the song's timeline, let's say every second, also via WebSocket. I managed to send data frequently to all connected clients/consumers with async_to_sync(channel_layer.group_send) from outside, but I couldn't find a solution for how to pass data/commands coming from the clients via WebSocket to my separately running mpd handler script.
I read in the Django Channels docs that it is not recommended to use while loops in the consumers because this blocks all communication; that's right, I had already tried it. Then I tried to receive messages with async_to_sync(channel_layer.receive)('channel_name') in the mpd handler, with a direct connection to a consumer. But this call blocks my mpd handler, because it works asynchronously even though I use async_to_sync.
So, my question:
Is it possible to pass messages to outside of Django Channels to other scripts with channel own methods? Do you have any suggestion how to solve this maybe with other methods or workarounds? I am looking for a reliable solution.
I have given this issue some thought and have some ideas, but I don't know whether any of them will lead to a solution:
Polling:
The clients send messages and requests frequently via WebSocket to control mpd and update the UI. In this case no handler would be needed. (I don't know whether this method would generate too much traffic on the WebSocket and make it slow. Also, the connection to mpd would have to be opened and closed frequently; I don't know whether that works robustly.)
Database:
Create a database that both the consumers and the mpd handler can access. The consumers write the incoming messages to the database, and the mpd handler reads them out and does the job. (Here I don't know whether there will be problems when the consumers and the mpd handler try to access the DB at the same time.)
Using queues from the multiprocessing module:
The consumers pass the messages to the mpd handler via a queue. (I don't know whether this is possible.)
Picking up the messages from Redis:
The mpd handler polls Redis frequently to pick up the messages. I read that when the layers are used in the usual way, only the group and channel names are listed in Redis; messages are passed via Redis when the consumers are started as workers. (That would mean that all my consumers must be started as background workers, but how?)
I hope you may have a solution to my question. You may realise from my ideas, and from the question marks involved, that I am not an IT expert. As I wrote at the beginning, I have a different engineering background and am a newbie at this, but I am very interested in learning something new! So please be patient with me when I don't understand everything immediately.
I hope to read your answers soon and thank you in advance.
Best regards.
Since nobody gave an answer to my question, I tried out some of the possible options myself.
I changed the binding of mpd from a fixed IP to a socket connection and created an mpd_Handler class with some functions/methods like connect to mpd, disconnect, play, pause, etc.
This class is imported in Django's consumers.py and views.py. Whenever a web client connects to Django or issues a new command (like play, skip, etc.), the mpd_Handler performs the command and responds with the current state of mpd, such as the current song's metadata.
A second mpd handler, which runs outside of Django as a separate script, frequently monitors the mpd state to detect changes. In case of a change at mpd (e.g., the song of a web radio stream has changed, or the elapsed time of the song), this handler informs all clients connected to the Django consumer group via async_to_sync(channel_layer.group_send), so that the clients can update their UI. A minimal sketch of this monitor follows.
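This is a rough sketch of such a standalone monitor, assuming the channels-redis backend; the group name "mpd_updates", the message type "mpd.update", the settings module, and the mpd socket path are placeholders:

    import os
    import time

    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')  # placeholder
    import django
    django.setup()

    from asgiref.sync import async_to_sync
    from channels.layers import get_channel_layer
    from mpd import MPDClient  # python-mpd2

    channel_layer = get_channel_layer()
    client = MPDClient()
    client.connect('/run/mpd/socket')  # placeholder socket path

    last_status = {}
    while True:
        status = client.status()
        if status != last_status:
            # Broadcast the change to every consumer in the group; the
            # consumer class needs a matching mpd_update() handler method.
            async_to_sync(channel_layer.group_send)(
                'mpd_updates',
                {'type': 'mpd.update', 'status': status},
            )
            last_status = status
        time.sleep(1)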
At the moment it works, and I hope this is a good solution and helps others who have the same problem. Other suggestions are still welcome!
Best regards.

Only allow connections from custom clients

I'm writing a Socket Server in Python, and also a Socket Client to connect to the Server.
The Client interacts with the Server in such a way that the Client sends information when an action is invoked, and the Server processes that information.
The problem I'm having is that I am able to connect to my Server with Telnet, and probably with other clients that I haven't tried yet. I want to refuse connections from these other clients and only allow connections from Python Clients (preferably my custom-made Client, as it sends information in order to communicate).
Is there a way I could set up authentication on connection to differentiate Python Clients from others?
Currently there is no code, as this is a problem I want to solve before getting my hands dirty.
When a new connection is made to your server, your protocol will have to specify some way for the client to authenticate. Ultimately there is nothing that the network infrastructure can do to determine what sort of process initiated the connection, so you will have to specify some exchange that allows the server to be sure that it really is talking to a valid client process.
@holdenweb has already given a good answer with the basic info.
If a piece of (terminal) software sends the bytes that your application expects as valid identification, your app will never know whether it is talking to an original client or to anything else.
A possible way to test for valid clients could be that your server sends an encrypted and authenticated challenge (which should be different on each test!), e.g. something like "what is 18:37:12 (the current date and time) plus 2 (random) hours?"
Encryption/authentication would then be a separate issue.
If you keep this algorithm secret, only your clients can answer it and validate themselves successfully. It can be hacked/reverse-engineered, but it is safe against basic attackers. A more standard variant of the same idea is sketched below.
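For example, a challenge/response handshake can use a keyed HMAC over a random challenge, so that a valid response requires knowledge of a shared secret rather than a secret algorithm. A minimal sketch (the secret value and the wire framing around it are placeholders):

    import hashlib
    import hmac
    import os

    SHARED_SECRET = b'change-me'  # placeholder; baked into server and client

    def make_challenge():
        """Server side: a fresh random challenge for every connection."""
        return os.urandom(32)

    def answer_challenge(challenge):
        """Client side: prove knowledge of the secret without sending it."""
        return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()

    def verify(challenge, response):
        """Server side: compare in constant time."""
        expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    # Example exchange:
    challenge = make_challenge()
    assert verify(challenge, answer_challenge(challenge))

As with the secret-algorithm approach, anyone who extracts the secret from the client can still impersonate it, so this only raises the bar against casual clients like Telnet.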

Python script that simultaneously listens/responds to HTTP requests, serial port, and time-based events?

Short version of my question:
How do I design a single Python script that can listen and respond to inputs received via HTTP or a serial port, and also initiate communications via these channels on its own? My problem is that I don't understand how to design a single script that both (i) uses a web framework to listen on some port for HTTP inputs, and (ii) also does other work that's independent of incoming HTTP requests.
Long version:
I want to use Python to design a system that does the following:
Listens to a serial port for occasional reports. Specifically, I have a network of JeeNode sensors (wireless Arduino-compatible modules) that talk to a central JeeLink, which connects to my computer via USB and talks to my Python script via pySerial.
Listens to a web URL for occasional inputs. Specifically, users send commands to the system via SMS to a Twilio number. Twilio intercepts the SMS messages and posts them to a URL I designate, and I use the Bottle micro web-framework to listen for new HTTP requests.
Responds to both types (serial and HTTP) of inputs. For example, if a user texts the command "Sleep", I want to (i) tell the sensors to go to sleep via the serial port -> JeeLink (which will then forward the command onto the remotes); and (ii) reply to the sender -- and maybe other users -- that the command has been received and is being executed.
Occasionally initiates its own communications to users (via HTTP -> Twilio -> SMS) or remote sensors (via serial -> JeeLink) without any precipitating input event. Two examples: (1) I want to report out to users or remote sensors every N minutes even if I haven't received any new inputs. (2) I want to tell users when remotes have actually entered Sleep mode. Because the remotes are battery-powered, they spend most of the time in an inaccessible low-power mode. They can only receive new commands from the JeeLink when they initiate a wireless "check-in" every 5 minutes. So while technically remotes go to sleep (or wake up, etc.) in response to a user command, commands and responses are effectively independent.
My problem is that all of the usage examples of web frameworks I've seen seem to assume that all precipitating events arrive via HTTP requests. I can create a Bottle object and use decorators to bind code to it that gets executed whenever it sees an HTTP request matching a specified URL path. But I don't know how to do that while simultaneously doing other work that's independent of HTTP events, for example listening to the serial port.
After struggling a lot, the potential solutions I'm considering now are:
Splitting the functionality into separate scripts. A.py listens for text messages via HTTP and writes the relevant information to some database; B.py continuously reads the database for new records and reacts accordingly, as well as listening to the serial monitor and doing other work. This seems like it would work fine, but it feels inelegant, and I suspect there's a simpler solution I'm unaware of.
Maybe the answer is related to Python decorators? I use various decorators to specify the URL paths that, when a matching HTTP request comes in, execute the code bound to the decorator. So I'm guessing that maybe there's a way to specify some other kind of decorator that, rather than listening for HTTP requests, gets executed when my "main" Python code tells it to? But I don't know enough about decorators to know if this is true.
It seems like you are trying to write an asynchronous application to manage your network of nodes via HTTP. You want to respond to incoming communications on multiple channels as they occur, you want to initiate communications on a schedule, on multiple channels, and you want those two forms of communication to interact. All of these communications are with an outside world that is slow, so it behooves you not to block if you don't need to.
It will probably be easiest to maintain your system if you organize your code into several Python modules, split by their area of concern - serial interface code, HTTP interface code, common processing code-paths, etc. Weave those components together in a central control module, which imports your libraries, and knows how to start and stop cleanly. Then you can test the serial interface independent of the web interface, and potentially reuse some of those Python modules in other projects.
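To make that concrete, here is a minimal untested sketch of one such arrangement: Bottle runs in a background thread and feeds incoming commands to the main loop through a thread-safe queue, while the main loop also polls the serial port and fires a periodic report. The port name, URL path, and handler functions are placeholders:

    import queue
    import threading
    import time

    import bottle
    import serial  # pySerial

    commands = queue.Queue()
    app = bottle.Bottle()

    @app.post('/sms')
    def incoming_sms():
        # Twilio posts the SMS text as a form field named "Body".
        commands.put(bottle.request.forms.get('Body', ''))
        return 'OK'

    threading.Thread(
        target=lambda: app.run(host='0.0.0.0', port=8080, quiet=True),
        daemon=True,
    ).start()

    def handle_report(line): print('serial:', line)      # your serial logic
    def handle_command(cmd): print('command:', cmd)      # your command logic
    def send_status_update(): print('periodic report')   # your outbound logic

    link = serial.Serial('/dev/ttyUSB0', 57600, timeout=0.1)  # placeholder JeeLink port
    last_report = time.monotonic()

    while True:
        line = link.readline()        # returns b'' when the timeout expires
        if line:
            handle_report(line)
        try:
            handle_command(commands.get_nowait())
        except queue.Empty:
            pass
        if time.monotonic() - last_report > 300:   # every 5 minutes
            send_status_update()
            last_report = time.monotonic()

A queue between one web thread and one main loop is often enough at this scale, and it keeps the serial code testable independently of the web code.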

"NOTICE AUTH" notifications when connecting to IRC server

As a learning exercise, I'm writing a Python program to connect to a channel on an IRC network, so I can output messages in the channel to stdout. I'm using asynchat and manually sending the protocol messages, rather than using something like Twisted or existing bot code from the net - again, it's a more useful learning experience that way.
I can send JOIN and USER commands quite happily, and can PING/PONG away as required. However, I've noticed when opening a socket to port 6667, I'll receive some messages:
NOTICE AUTH :*** Looking up your hostname...
NOTICE AUTH :*** Checking ident
NOTICE AUTH :*** Found your hostname
NOTICE AUTH :*** No identd (auth) response
even if I've not yet sent the JOIN/USER commands.
So, is this opening sequence of notifications specified anywhere? As far as I can see, the RFC doesn't specify for anything in particular to happen before the client sends the JOIN command, and I wasn't sure whether to wait for receipt of these notices before sending the JOIN command, and if so how do I detect that I've received all of the notices?
There's no RFC requirement for this; it's just a common thing that servers in the wild do. Observe that they're plain old NOTICE commands (i.e. just messages). Just treat them as messages sent by a pseudo-user "AUTH" (since the server doesn't have a better name for you yet). You're not required to wait for them, and the server is not required to send them.
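For illustration, a minimal framework-free sketch of that approach: register immediately, log any NOTICEs as they arrive, and join once the 001 (RPL_WELCOME) numeric confirms registration. The server, nick, and channel are placeholders:

    import socket

    sock = socket.create_connection(('irc.example.net', 6667))  # placeholder server
    sock.sendall(b'NICK mybot\r\n')
    sock.sendall(b'USER mybot 0 * :My learning bot\r\n')

    buf = b''
    while True:
        data = sock.recv(4096)
        if not data:
            break
        buf += data
        while b'\r\n' in buf:
            raw, buf = buf.split(b'\r\n', 1)
            line = raw.decode('utf-8', errors='replace')
            parts = line.split(' ')
            if parts[0] == 'PING':
                sock.sendall(('PONG ' + parts[1] + '\r\n').encode())
            elif 'NOTICE' in parts[:2]:
                print('server notice:', line)   # includes the AUTH notices
            elif len(parts) > 1 and parts[1] == '001':  # RPL_WELCOME
                sock.sendall(b'JOIN #mychannel\r\n')    # placeholder channel
            else:
                print(line)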
