I am working on an application that will use MQTT, and I will be using the Python library. I have been leaning towards mosquitto but can find no way of programmatically setting access control limits for it. The application I'm writing needs to be able to differentiate between users and only allow them to subscribe to certain topics. It looks like this is currently done from a config file. Is there a scalable solution to access control limits in mosquitto? If not, do you know of an MQTT broker in which this exists?
Even if this may no longer concern you, others could find it useful. I am following mosquitto's man page here.
There are two configuration files, a general one, say mosquitto.conf, and an ACL (Access Control List) one, say acl.conf.
mosquitto.conf enables the acl.conf file for access control:
acl_file acl.conf
acl.conf defines the access control behavior:
# users can anonymously publish to the topic 'in'
topic write in
# users can subscribe to topics named 'out/%u', where %u is the user's name
pattern read out/%u
# an admin may subscribe to 'in'
# and publish to all subtopics of 'out/' (note the +)
user adminWithSecretName
topic read in
topic write out/+
We execute mosquitto -c mosquitto.conf to run mosquitto with the configuration file.
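From the Python side, a client honoring these rules could look roughly like the following sketch with paho-mqtt 1.x; the broker address, port and the admin password are assumptions, not part of the configuration above:

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # "topic read in" lets the admin receive everything published to 'in'.
    print("received on", msg.topic, ":", msg.payload.decode())
    # "topic write out/+" lets the admin reply on any direct subtopic of out/.
    client.publish("out/some-nonce", "reply")

admin = mqtt.Client()
admin.username_pw_set("adminWithSecretName", "admin-password")  # hypothetical password
admin.on_message = on_message
admin.connect("localhost", 1883, 60)
admin.subscribe("in")
admin.loop_forever()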
In this case, a dynamic authentication mechanism can be established by using randomly generated user names.
Example: Alice wants to subscribe so that she can read her private messages. She sends her credentials in combination with a nonce N1 to in. Furthermore, she also subscribes to the topic out/N1, using N1 as her user name. The pattern read out/%u allows that.
A third-party server application, connected as adminWithSecretName and subscribed to the topic in, receives Alice's message. It verifies its authenticity, generates a new nonce N2, and publishes it to out/N1, to which Alice has subscribed.
From now on -- at least for this session -- out/N2 is the regular topic on which Alice, or rather her devices, will receive messages. Therefore, Alice unsubscribes from out/N1 and subscribes to out/N2. The third-party server application publishes all new messages belonging to Alice to the topic out/N2.
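A sketch of Alice's side of this handshake, again with paho-mqtt 1.x; the broker address, the JSON payload format and the nonce value are assumptions, and it is assumed the broker accepts the nonce as a user name:

import json
import paho.mqtt.client as mqtt
import paho.mqtt.publish as publish

N1 = "randomly-generated-nonce"  # hypothetical nonce, reused as the temporary user name

# Step 1: publish the credentials plus N1 anonymously to 'in'
# (allowed by "topic write in").
publish.single("in",
               json.dumps({"user": "alice", "password": "secret", "nonce": N1}),
               hostname="localhost")

# Step 2: subscribe to out/N1, connecting with N1 as the user name
# (allowed by "pattern read out/%u").
def on_message(client, userdata, msg):
    n2 = json.loads(msg.payload.decode())["nonce"]
    # ... unsubscribe from out/N1 and subscribe to out/<N2> for the rest of the session ...

sub = mqtt.Client()
sub.username_pw_set(N1)
sub.on_message = on_message
sub.connect("localhost", 1883, 60)
sub.subscribe("out/" + N1)
sub.loop_forever()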
Further Considerations: One may also want to reflect on other aspects of security such as TLS and/or per-message encryption. The configuration discussed here would, depending on the targeted degree of security/privacy, probably also require TLS.
On the other hand, TLS could be unnecessary if the messages are encrypted individually. Someone, say Eve, could intercept (or even subscribe to!) the messages if she had access to the cable/WiFi stream, as she would see the secret user name in plain text. But when one already has access to the data stream, the bytes can be intercepted anyway; they are protected either way, whether by TLS or by per-message encryption. Also, traffic analysis can be applied to both approaches.
I would suggest using either TLS or per-message encryption. Correctly implemented and applied, both should lead to comparable security.
You could write a plugin to handle this for you. See http://mosquitto.org/2013/07/authentication-plugins/ for some examples.
You may find more answers if you ask on the mosquitto mailing list.
If you are familiar with Java, you should try the HiveMQ MQTT broker: http://www.hivemq.com.
There is an open PluginSDK, which enables you to write any kind of extension to the broker.
You can implement the authentication or authorization method that fits your use case best, for example backed by a database, a file, and so on.
Authorization based on topic is a common use case, and there is an example in the HiveMQ Plugin Guide.
As an entry point into HiveMQ plugin development, see the Get Started with Plugins page: http://www.hivemq.com/documentations/getting-started-plugins/
Disclosure: I'm one of the developers of HiveMQ.
Related
I'd like to fetch mails from a server, but I also want to control when to delete them.
Is there a way to do this?
I know this setting is very usual in mail clients, but it seems this option is not well supported by the POP3 specification and/or server implementations.
(I'm using Python, but I'm OK with other languages/libraries; Python's poplib seems very simplistic.)
Many POP3 clients delete successfully retrieved messages automatically, but that's a feature of the client itself, not of the protocol. POP3 supports four basic operations during the transaction phase of a session:
Listing all available messages in the mailbox (LIST)
Retrieving a specific message (RETR)
Flagging a message for deletion (DELE)
Clearing all deletion flags (RSET)
After the client ends the session with the QUIT command, any messages still flagged for deletion are deleted during the update phase. Note, though, that the RETR command (based on my reading of RFC 1939) does not flag a message for deletion; that needs to be done explicitly with the DELE command.
Note, however, that a particular POP3 server may have a policy of deleting retrieved messages, whether or not the client requested they be deleted. Whether such a server provides an operation to bypass that is beyond the scope of the protocol. (A discussion of this point is mentioned in section 8 of the RFC, but is not part of the protocol itself.)
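For example, with Python's poplib (host and credentials are placeholders), you can retrieve messages and delete them only when you explicitly decide to:

import poplib
from email import parser

# Placeholders for the mail server and account.
conn = poplib.POP3_SSL("pop.example.com")
conn.user("alice@example.com")
conn.pass_("secret")

num_messages = len(conn.list()[1])
for i in range(1, num_messages + 1):
    # RETR downloads the message but does NOT flag it for deletion.
    lines = conn.retr(i)[1]
    msg = parser.BytesParser().parsebytes(b"\r\n".join(lines))
    print(msg["Subject"])
    # Only an explicit DELE flags the message:
    # conn.dele(i)

# QUIT starts the update phase; messages never flagged with DELE stay on the server.
conn.quit()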
POP3 by design downloads mail and removes it from the server after it's successfully fetched. If you don't want that, use the IMAP protocol instead. IMAP lets you delete mail at your leisure, as opposed to when it's synced to your machine.
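A rough imaplib sketch of that workflow (server, credentials and folder are placeholders); a message is only removed once you explicitly flag it as deleted and expunge:

import imaplib

# Placeholders for the mail server and account.
conn = imaplib.IMAP4_SSL("imap.example.com")
conn.login("alice@example.com", "secret")
conn.select("INBOX")

typ, data = conn.search(None, "ALL")
for num in data[0].split():
    typ, msg_data = conn.fetch(num, "(RFC822)")
    raw_message = msg_data[0][1]
    # ... process raw_message at your leisure ...
    # Delete only when you decide to:
    # conn.store(num, "+FLAGS", "\\Deleted")

# conn.expunge()  # actually removes messages flagged \Deleted
conn.logout()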
I'm trying to use paho.mqtt for Python (project pages) and everything works nicely. The only problem I have is that I would find it very useful to know who sent the message. I looked at the source code but could not quite work out whether the client variable passed to on_message is the client I use to connect or the details of the client who published the message (I'm guessing it's the first option).
So the question is: is it possible to find out who (the user name) sent the message?
The MQTT protocol was designed to be as lightweight as possible, which means that the message header contains the absolute bare minimum needed to deliver a message to a specific topic. There is no room in the header for anything else.
MQTT is also a pub/sub protocol, and one of the key features of this type of protocol is to decouple the publisher from the subscriber as much as possible. This means that the publisher shouldn't care how many subscribers there are, and subscribers shouldn't care where the information comes from as long as it arrives on a topic they are interested in.
If you want any more information other than the message topic then you have to add it to the payload yourself.
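A common workaround is to embed the sender in the payload yourself, e.g. as JSON; a small sketch (broker address, topic and field names are only an example):

import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # 'client' here is always your own client instance; the publisher's identity
    # is only available if the publisher put it into the payload itself.
    data = json.loads(msg.payload.decode())
    print("from", data.get("sender"), "on", msg.topic, ":", data.get("body"))

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883, 60)
client.subscribe("some/topic")

# Publisher side: add the sender name to the payload yourself.
client.publish("some/topic", json.dumps({"sender": "alice", "body": "hello"}))
client.loop_forever()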
I am using Python, and setting the headers as described here would result in my e-mails being flagged as spam by SpamAssassin.
I am sending reminders for unpaid invoices, so I would like to do everything in my power to make the receiver aware of the e-mail - but this cannot happen if my e-mail ends up in the spam folder due to the urgent flag.
Using X-MSMail-Priority in the header would add a positive spam score from MISSING_MIMEOLE, and using X-Priority would also add a positive score. Using only Priority in the header is not supported by the Thunderbird mail client nor by the RoundCube web interface, so the urgency is not shown.
What can I do to make my e-mails urgent, but simultaneously make SpamAssassin (and other filters) happy?
You cannot control how others filter their spam. If you find that anything in your mail triggers common spam filters, you have to remove that if it is important that your users get those emails. The priority header is abused by spammers, so you cannot use that.
Likewise, I would expect any other unauthenticated priority indicator to be abused by spammers, so there won't be any reliable way. Possibly signing these headers using DKIM and deploying DMARC (with a strict policy) on your domain might help, but I do not know for sure whether filters such as SpamAssassin are smart enough to consider the priority header authenticated in such cases.
Deploying DMARC might be a good idea anyway for transactional mail, to prevent spoofing.
I have a webapp with some functionality that I'd like to make accessible via an API or web service. My problem is that I want to control where my API can be accessed from, that is, I only want the apps that I create or approve to have access to my API. The API would be a web-based REST service. My users do not log in, so there is no authentication of the user. The most likely use case, and the one to work with now, is that the app will be an iOS app. The API will be coded with Django/Python.
Given that it is not possible to view the source code of an iOS app (I think, correct me if I'm wrong), my initial thinking is that I could just have some secret key that is passed in as a parameter to the API. However, anyone listening in on the connection would be able to see this key and just use it from anywhere else in the world.
My next thought is that I could add a prior step. Before the app gets to use the API, it must pass a challenge. On first request, my API will create a random phrase and encrypt it with some secret key (RSA?). The original, unencrypted phrase will be sent to the app, which must also encrypt the phrase with the same secret key and send back the encrypted text with its request. If the encryptions match up, the app gets access; if not, it doesn't.
My question is: Does this sound like a good methodology and, if so, are there any existing libraries out there that can do these types of things? I'll be working in python server-side and objective-c client side for now.
The easiest solution would be IP whitelisting if you expect the API consumer to be requesting from the same IP all the time.
If you want to support the ability to 'authenticate' from anywhere, then you're on the right track. It would be a lot easier to share an encryption method and then require users to send a request with an encrypted API consumer handle / password / request date. Your server decodes the encrypted value, checks the handle/password against a whitelist you control, and then verifies that the request date is within some valid timeframe; i.e., if the request date wasn't within the last minute, deny the request (that way, if someone intercepts the encrypted value, it's only valid for one minute). The encrypted value keeps changing because the request time changes, so the key for authentication keeps changing.
That's my take anyways.
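One way to sketch this idea in Python, using an HMAC over the consumer handle and a timestamp instead of reversible encryption (the shared secret, handle and one-minute window are assumptions):

import hashlib
import hmac
import time

SHARED_SECRET = b"hypothetical-shared-secret"  # known to the approved app and the server

def sign_request(handle):
    # Client side: MAC the consumer handle together with the current time.
    timestamp = str(int(time.time()))
    mac = hmac.new(SHARED_SECRET, ("%s:%s" % (handle, timestamp)).encode(), hashlib.sha256)
    return {"handle": handle, "timestamp": timestamp, "signature": mac.hexdigest()}

def verify_request(handle, timestamp, signature, max_age=60):
    # Server side: reject stale requests, then recompute and compare the MAC.
    if abs(time.time() - int(timestamp)) > max_age:
        return False
    expected = hmac.new(SHARED_SECRET, ("%s:%s" % (handle, timestamp)).encode(), hashlib.sha256)
    return hmac.compare_digest(expected.hexdigest(), signature)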
In addition to Tejs' answer, one known way is to bind the Product ID of the OS (or another unique ID of the client machine) to a specific password that is known to the user but not stored in the application, and use those to encrypt/decrypt messages. So, for example, when you get the unique number of the machine from the user, you supply them with a password, such that the two complement each other to create a seed X for RC4 (for example) and use it for encryption/decryption. This seed X is known to the server as well, and it also uses it for encryption/decryption. I won't tell you this is the best way, of course, but assuming you trust the end user (though not necessarily anyone who has access to their computer), it seems sufficient to me.
Also, a good Python library for cryptography is pycrypto.
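A minimal sketch of deriving such a seed and using it with pycrypto's RC4 (ARC4) cipher; the machine ID and password are placeholders, and note that RC4 is considered weak by today's standards:

import hashlib
from Crypto.Cipher import ARC4  # provided by pycrypto / PyCryptodome

# Hypothetical inputs: the machine's unique ID plus the password given to the user.
machine_id = "PRODUCT-ID-1234"
password = "user-password"

# Client and server derive the same seed X from the two values.
seed = hashlib.sha256((machine_id + password).encode()).digest()

ciphertext = ARC4.new(seed).encrypt(b"message to the server")
# The server derives the same seed and decrypts.
plaintext = ARC4.new(seed).decrypt(ciphertext)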
On first request, my API will create a random phrase and encrypt it with some secret key (RSA?)
Read up on http://en.wikipedia.org/wiki/Digital_signature to see the whole story behind this kind of handshake.
Then read up on http://en.wikipedia.org/wiki/Lamport_signature and its cousin http://en.wikipedia.org/wiki/Hash_tree.
The idea is that a signature can be used once. Compromise of the signature in your iOS code doesn't matter since it's a one-use-only key.
If you use a hash tree, you can get a number of valid signatures by building a hash tree over the iOS binary file itself. The server and the iOS app both have access to the same file being used to generate the signatures.
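For illustration, a bare-bones Lamport one-time signature in Python using SHA-256 (no key management, and each key pair must only ever sign one message):

import hashlib
import os

def keygen():
    # 256 pairs of random secrets; the public key is their SHA-256 hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def _bits(message):
    digest = hashlib.sha256(message).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(message, sk):
    # Reveal one secret from each pair, chosen by the message-hash bits.
    return [sk[i][bit] for i, bit in enumerate(_bits(message))]

def verify(message, signature, pk):
    return all(hashlib.sha256(sig).digest() == pk[i][bit]
               for i, (sig, bit) in enumerate(zip(signature, _bits(message))))

sk, pk = keygen()
sig = sign(b"request payload", sk)
assert verify(b"request payload", sig, pk)  # the key pair must now be discarded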
I'm looking into a possible feature for my little to-do application... I like the idea that I can send an email to a particular email address, containing a to-do task I need to complete, and this will be read by my web application and put in the database... So, when I come to log into my application, the to-do task I emailed will be there as an entry in the app.
Is this possible? I have a slice with SliceHost (basically a dedicated server), so I have total control over what to install, etc. I'm using Python/Django/MySQL for this.
Any ideas on what steps to take to make this happen?
If I were to implement this, I'd use a scheduler and a job to be scheduled.
That job would connect to the mail server (be it POP3 or IMAP) and parse the unread messages (or messages not yet read by the job). Based on those, it would insert the corresponding records.
You'd keep two types of records that way: a list of mail message IDs which have already been processed (so you don't reprocess mails), and a list of tasks.
Disadvantage is that it takes some time before you see the task, as the job only executes every X minutes, or seconds.
If that is not good enough I'd go for a permanent IMAP connection, but you'd have to implement more error handling; you don't just retry automatically every X minutes.
Googling for Django +scheduler will get you started.
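A rough sketch of such a job using imaplib and two hypothetical Django models, Task and ProcessedMessage (the host, credentials and model fields are assumptions); it could run from cron or your scheduler every X minutes, e.g. as a management command:

# Run from within Django (e.g. a management command), so the ORM is already set up.
import email
import imaplib

from myapp.models import ProcessedMessage, Task  # hypothetical models

def poll_mailbox():
    conn = imaplib.IMAP4_SSL("imap.example.com")
    conn.login("todo@example.com", "secret")
    conn.select("INBOX")

    typ, data = conn.search(None, "UNSEEN")
    for num in data[0].split():
        typ, msg_data = conn.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        message_id = msg.get("Message-ID", "")

        # Skip mails the job has already processed.
        if ProcessedMessage.objects.filter(message_id=message_id).exists():
            continue

        Task.objects.create(title=msg.get("Subject", "(no subject)"))
        ProcessedMessage.objects.create(message_id=message_id)

    conn.logout()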
Also have a look at this StackOverflow thread; no need to reinvent the wheel :)
I needed the exact same thing. I use the Lamson project (which is written in Python) to transform email, forward email based on rules to my www.evernote.com and Thinking Rock (www.trgtd.com.au) accounts, update firewall web filtering rules, update allow/deny lists for my spam filter, read and write databases, etc.
I like to think of it as email server automation and email application development.
www.lamsonproject.org
Troy
One way that I've solved this in the past was using qmail's .qmail files (docs).
Basically you set up qmail and point your email address (for ease of use, let's assume proc@whatever.com is your email address) to your home directory. In that directory you set up a .qmail-proc file to handle the mail.
This allows you to use a full-fledged SMTP server on your server, including spam filtering, forwarding, aliases, all that fun stuff. You can then pipe the data from an email into an application. In your case, I would suggest making a Management Command in Django to process the email (I'll call it proc_email). Thus your .qmail-proc may look like:
/var/spool/mail/proc
| /www/django/myproject/manage.py proc_email
This stores a copy of the email in /var/spool/mail/proc, then passes the email to the script in the second line. The email itself is passed to proc_email via sys.stdin. Simply read the email from there, and store it through your Django Models.
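A minimal sketch of what the proc_email management command could look like; the Task model and its fields are assumptions:

# myapp/management/commands/proc_email.py (hypothetical app name)
import email
import sys

from django.core.management.base import BaseCommand

from myapp.models import Task  # hypothetical model

class Command(BaseCommand):
    help = "Read one email from stdin (piped in by .qmail-proc) and store it as a task."

    def handle(self, *args, **options):
        msg = email.message_from_bytes(sys.stdin.buffer.read())
        Task.objects.create(
            title=msg.get("Subject", "(no subject)"),
            sender=msg.get("From", ""),
        )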
If you need to process email for different addresses later, you can also set up aliases which point to your home directory, and use .qmail-<username> files for each alias. Allowing you to pass other flags (such as the username for each alias) to proc_email if needed.
I should note that this isn't the simplest solution, but it can scale, and is pretty darn bulletproof.
I would not focus on Django for this.
I would create a mail server to catch these emails. Use http://docs.python.org/library/smtpd.html.
I would then use just the Django ORM to update the database based on the emails received.
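A small sketch combining the standard-library smtpd module with the Django ORM (note that smtpd was removed in Python 3.12, where aiosmtpd is the replacement); the settings module and Task model are assumptions:

import asyncore
import email
import os
import smtpd

import django

# Point Django at your settings so the ORM can be used outside manage.py.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")  # hypothetical
django.setup()

from myapp.models import Task  # hypothetical model

class TodoMailServer(smtpd.SMTPServer):
    def process_message(self, peer, mailfrom, rcpttos, data, **kwargs):
        # Depending on the Python version, data arrives as bytes or str.
        raw = data if isinstance(data, bytes) else data.encode()
        msg = email.message_from_bytes(raw)
        Task.objects.create(title=msg.get("Subject", "(no subject)"), sender=mailfrom)

server = TodoMailServer(("0.0.0.0", 2525), None)
asyncore.loop()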