Python Email via MS Exchange: Message Submission Rate Limit

I have written a Python script that iterates through rows of an Excel file and, for each row:
Gets an e-mail address, name, and name of attachment file to use
Composes an e-mail
Sends out the e-mail
I'm not sure whether it's accurate to call this mass-emailing, or whether it is a candidate for being black-listed, given that it sends out individualized e-mails. With a message submission rate limit of 5/minute, I want to throttle the script (or have the limit increased to 100).
So my question is: is this sort of scenario, assuming the limit is increased to 100, prone to black-listing?
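For reference, throttling the loop itself is straightforward. A minimal sketch, assuming openpyxl for the spreadsheet, smtplib against a made-up Exchange host, and a three-column layout (address, name, attachment path):
import smtplib
import time
from email.message import EmailMessage
from openpyxl import load_workbook

RATE_LIMIT_PER_MINUTE = 5              # raise this if the limit is increased to 100
DELAY = 60.0 / RATE_LIMIT_PER_MINUTE

wb = load_workbook("recipients.xlsx")
with smtplib.SMTP("exchange.example.com", 587) as server:
    server.starttls()
    server.login("user", "password")
    for address, name, attachment_path in wb.active.iter_rows(min_row=2, values_only=True):
        msg = EmailMessage()
        msg["From"] = "me@example.com"
        msg["To"] = address
        msg["Subject"] = f"Hello {name}"
        msg.set_content(f"Dear {name}, please find the attached file.")
        with open(attachment_path, "rb") as fh:
            msg.add_attachment(fh.read(), maintype="application",
                               subtype="octet-stream", filename=attachment_path)
        server.send_message(msg)
        time.sleep(DELAY)              # stay under the message submission rate limit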

It depends on how well you know the people you're mailing.
If you know them pretty well, it should be fine. If they're total strangers, the recipients might think it's spam and start blocking you.
I could help more if you told me how well you know the recipients.

It's not easy to answer your question, as it depends heavily on the remote email environment and on what you mean by individualized emails (merely a different "Hello Mr. ZZ" or "Dear Ms. YY" isn't really an individualized email these days). To give you three possible examples:
Situation 1:
All users are on the same email environment (e.g. Exchange Online / Office 365). The remote mail server might then see 100+ similar emails and mark them as spam. If all 100+ users are on 100+ different email servers this may be different; however, the following is still possible:
Situation 2:
One user thinks the email is spam and reports it as such. Depending on the anti-spam engine used, a hash value of that email may be created, and other email servers using the same anti-spam engine may then detect your email as spam as well.
Situation 3:
The users are on different email environments, but in front of them is an anti-spam cloud solution. This solution will then see 100+ similar emails coming from one email environment and may therefore classify them as spam.
Off-topic: You might consider using a service like MailChimp, which uses different email servers to spread out similar emails. This helps prevent such issues, as the mass emails aren't sent from only one server. On top of that, you do not risk your own email server being blacklisted, which could have a very bad business impact on your company.


Why does SendGrid allow me to send emails from any address?

I have a local Python file that I'm using to send emails through SendGrid's SMTP:
import smtplib
from email.mime.text import MIMEText

gmail_sender = "example@gmail.com"
server_username = "apikey"
server_password = prod.CONFIG['sendgrid_SMTP']  # from a local config module

email_information = MIMEText("message body")
email_information['To'] = "recipient@example.com"

server = smtplib.SMTP_SSL('smtp.sendgrid.net', 465)
server.login(server_username, server_password)
email_information['From'] = gmail_sender
server.sendmail(email_information['From'], email_information['To'],
                email_information.as_string())
I'm confused about who is sending the email. I replaced gmail_sender with several different addresses, and without having to give the passwords to those accounts, I could still send email through SendGrid's SMTP. In the From field of the email I sent, it shows the address I set as gmail_sender plus "via sendgrid.net." I can make it seem like anyone sent the email; isn't this a security concern?
Any guidance is appreciated :)
The alternative is rather daunting. You would have to technically prove to them that every address you want to send from is actually yours.
Some services require you to prove that a domain is yours by giving you a unique token and telling you to publish it in the domain's DNS records. If you have control over the DNS for a domain, you have control over the domain. But there is no similar mechanism for email addresses - you could simply forge the sender on the very email that is supposed to prove you own the address.
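To illustrate that DNS-based approach, a verification check often boils down to something like the following sketch (using dnspython; the _verify record name and the token are made up for illustration):
import dns.resolver  # pip install dnspython

def domain_is_verified(domain, expected_token):
    # Look for the token the service asked the customer to publish.
    try:
        answers = dns.resolver.resolve(f"_verify.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
    return any(expected_token in record.to_text() for record in answers)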
Anyway, going through this ordeal for every domain you want to use is already a chore. Imagine what it would mean for clients who want to use dozens, hundreds, or even thousands of different sender addresses.
The SendGrid terms of service have some general language about network abuse, which probably applies to using somebody else's email address. I could find nothing specific about address forgery in their ToS. Having a legal restriction in a contract (and enforcing it!) relieves them of the need to implement a technical restriction.

Send urgent/high priority e-mails automatically

Although I am using Python, setting the headers as described here would result in my e-mails being flagged as spam by SpamAssassin.
I am sending reminders for unpaid invoices, so I would like to do everything in my power to make the receiver aware of the e-mail - but this cannot happen if my e-mail ends up in the spam folder because of the urgent flag.
Using the X-MSMail-Priority header adds a positive spam score from MISSING_MIMEOLE, and using X-Priority also adds a positive score. Using only the Priority header is not supported by the Thunderbird mail client or the RoundCube web interface, so the urgency is not shown.
What can I do to make my e-mails urgent, but simultaneously make SpamAssassin (and other filters) happy?
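For reference, these are the headers in question as set from Python; a minimal sketch with made-up addresses, and no guarantee about how any particular filter will score them:
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Reminder: unpaid invoice"
msg["From"] = "billing@example.com"
msg["To"] = "customer@example.com"
msg["X-Priority"] = "1"            # highest priority
msg["X-MSMail-Priority"] = "High"  # adds the MISSING_MIMEOLE score when X-MimeOLE is absent
msg["Priority"] = "urgent"         # ignored by Thunderbird and RoundCube
msg.set_content("Please settle the attached invoice at your earliest convenience.")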
You cannot control how others filter their spam. If you find that anything in your mail triggers common spam filters, you have to remove that if it is important that your users get those emails. The priority header is abused by spammers, so you cannot use that.
Likewise, I would expect any other unauthenticated priority indicator to be abused by spammers, so there is unlikely to be a safe way to do this. Possibly signing these headers with DKIM and deploying DMARC (with a strict policy) on your domain might help, but I do not know for certain that filters such as SpamAssassin are smart enough to treat the priority header as authenticated in such cases.
Deploying DMARC might be a good idea anyway for transactional mail, to prevent spoofing.
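For reference, a strict DMARC policy is published as a DNS TXT record on your sending domain; a minimal example (domain and report address are placeholders):
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"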

How to load dummy/fake/mock emails and email folders to a shared IMAP account to test MUA configs?

Summary
How do I create a set of on-demand mock/dummy/fake emails in numerous IMAP folders? The email content needs to be share-able in a public forum via a commonly-accessed IMAP-server account (typically for testers/developers trying to debug MUA problems/configurations), with no privacy risks.
I haven't yet found a solution that does this. Unless I can find something (I'm definitely looking for suggestions), I'm relegated to writing my own software client. If so, I'd do it in Python, and I'm looking for general pointers on which tools/libraries/methods/approaches I should employ to most quickly get a first working prototype.
How should I solve the above, given the context below?
Purpose
I want to test various MUA deployments, sharing the same IMAP
account between many users/testers/developers (of any MUA) in a
public arena. Example: I might ask on a Notmuch email list:
"Why is my mbsync/Notmuch
config not working? Here's a shared gmail.com account we can
collectively use as a common IMAP server to minimize server-side
variables and thus collectively help debug stuff."
IMAP-Client Requirements
The IMAP-client program:
must be able to create a variable number of nested IMAP folders with any number of emails,
must prove that all email content and folder names are share-able, with no privacy concerns (any reasonable content will do, and it doesn't have to make sense; e.g. variants of Lorem ipsum might work, or the user/caller could provide three or more example emails as input), so long as the emails can be opened and read by MUAs, and their attachments are "real" enough to be opened by the attachment file's corresponding application,
will include some number of emails with 1 or more attachments,
will optimally (but not required for initial versions) be capable of generating GBs of content by creating many thousands of emails in hundreds of nested IMAP folders. Using many or large file attachments will help do this.
must be able to do all of the above on-demand, given any new/fresh
IMAP-server account credentials.
As an implementation shortcut, it's ok for the client to duplicate
much/most of the email content, so long as there's significant variance
in Date:, To:, From:, and Subject: headers and email-folder names (all
of which are presumably easy to "randomize").
More Details
I've pondered trying to non-private-ize existing emails/folders from
IMAP accounts I already have (that serve the above requirements), but
that work appears way too hard. Too much personal, sensitive information
would need to be "converted"/"private-ized." However, I'd like to hear
options for ways to easily privatize (scramble, encrypt, something?)
this existing email content. Such a path might save me having to write
the software.
The only way I see to solve this properly: leverage an IMAP client
program (again, I'm presumably writing it) that can create emails and
email folders on any designated IMAP server/account. Program input can
include example (presumably private) email content, number of folders
and nesting levels, randomness, date ranges (of emails), etc.
I've not yet found anything that does this.
GreenMail appears to set up the IMAP server, but not the IMAP server content--unless I'm overlooking something?
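Absent an existing tool, a first prototype can be quite small using the standard-library imaplib: create folders and APPEND generated messages directly, with no SMTP involved. A minimal sketch (host, credentials, folder delimiter, and counts are all placeholders/assumptions):
import imaplib
import random
from email.message import EmailMessage
from email.utils import formatdate

HOST, USER, PASSWORD = "imap.example.com", "tester@example.com", "secret"

def seed_mailbox(folders=5, messages_per_folder=20):
    imap = imaplib.IMAP4_SSL(HOST)
    imap.login(USER, PASSWORD)
    for f in range(folders):
        folder = f"Testing/Folder{f:03d}"   # "/" hierarchy delimiter assumed; servers vary
        imap.create(folder)
        for m in range(messages_per_folder):
            msg = EmailMessage()
            msg["From"] = f"sender{random.randint(1, 999)}@example.org"
            msg["To"] = USER
            msg["Subject"] = f"Dummy message {m} in {folder}"
            msg["Date"] = formatdate(localtime=True)
            msg.set_content("Lorem ipsum dolor sit amet.")
            # APPEND uploads the message to the folder without sending it anywhere.
            imap.append(folder, None, None, msg.as_bytes())
    imap.logout()

seed_mailbox()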

Parsing emails as soon as they are received

I have users sending emails with some text I need to extract. Each user's email address is mapped to a single mailbox. I'm currently using a cron job that polls the mailbox (Postfix) every 5 minutes, checks for new messages, and sends them to a queue where workers parse them. I have two main questions:
Is there a way I can parse the email as soon as it's received instead of polling the server? Also, how could I implement this so that it scales, for example to 50 incoming messages per second?
I'm programmatically writing each user's email address to point to a mailbox in the Postfix configuration file. Would it be better to create a catch-all account, so I don't have to write out each email address? I know, however, that catch-all accounts are more susceptible to spam.
Use a pipe alias to catch the email, then use Celery to dump it into a message queue for processing.
Yes, this can be done quite easily. All you need to do is configure Postfix to forward email to a script instead of to a mailbox. It does not really have to be a catch-all; you can configure Postfix to forward specific addresses to a script. The script can be written in any language; I have written such a script in PHP a couple of times. Another possibility, for a very busy server handling something like 50 emails per second, is to write your own filter server and configure Postfix to pass each message to your filter.
To forward email to a script, put a line like this in the aliases file (the path must point to your script):
someaccount: "|/usr/local/bin/emailParser.php"
To forward emails to a filter, it has to be configured in master.cf, which is a little more difficult.
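Tying the two suggestions together, a pipe-alias handler in Python might look like this minimal sketch, assuming Celery with a Redis broker (module name and broker URL are placeholders):
#!/usr/bin/env python3
import sys
from email import message_from_string
from celery import Celery

app = Celery("mailtasks", broker="redis://localhost:6379/0")

@app.task
def parse_email(raw_message):
    msg = message_from_string(raw_message)
    # Extract whatever the workers need and hand it to the database layer here.
    return msg.get("Subject", "")

if __name__ == "__main__":
    # Postfix pipes the raw message to stdin; enqueue it and exit immediately
    # so the MTA is not blocked while parsing happens in the workers.
    raw = sys.stdin.buffer.read().decode(errors="replace")
    parse_email.delay(raw)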
I would recommend using Procmail for this. It is specifically designed to process your incoming mail and you can pass all mail with a certain property to your app.
http://www.procmail.org/
The spam problem with catch-all addresses can usually be solved quite easily by monitoring all mail on the machine. If multiple addresses receive the same mail, then there's a high probability that it's spam.

In Django, I want to insert a database record by sending myself an email?

I'm looking into a possible feature for my little to-do application... I like the idea that I could send an email to a particular address, containing a to-do task I need to complete, and this would be read by my web application and put into the database. So, when I log into my application, the to-do task I emailed would be there as an entry in the app.
Is this possible? I have a slice with SliceHost (basically a dedicated server), so I have total control over what to install, etc. I'm using Python/Django/MySQL for this.
Any ideas on what steps to take to make this happen?
If I were to implement this, I'd use a scheduler and a job to be scheduled.
That job would connect to the mail server (be it POP3 or IMAP) and parse the unread messages (or messages unread by the job). Based on that I would insert that record.
You'd get two types of records that way: a list of mail message IDs that have already been processed (so you don't reprocess mails) and a list of tasks.
The disadvantage is that it takes some time before you see the task, as the job only executes every X minutes or seconds.
If that is not good enough, I'd go for a permanent IMAP connection, but then you'd have to implement more error handling; you don't just retry automatically every X minutes.
Googling for Django + scheduler will get you started.
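A sketch of what such a job could look like as a Django management command run from cron; the IMAP host, credentials, and the Todo/ProcessedMessage models are all hypothetical:
import imaplib
from email import message_from_bytes
from django.core.management.base import BaseCommand
from todos.models import Todo, ProcessedMessage  # hypothetical models

class Command(BaseCommand):
    help = "Poll the mailbox and turn unseen messages into Todo records."

    def handle(self, *args, **options):
        imap = imaplib.IMAP4_SSL("imap.example.com")
        imap.login("todo@example.com", "secret")
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            msg = message_from_bytes(msg_data[0][1])
            msg_id = msg.get("Message-ID", "")
            if ProcessedMessage.objects.filter(message_id=msg_id).exists():
                continue  # already handled in an earlier run
            Todo.objects.create(title=msg.get("Subject", "(no subject)"))
            ProcessedMessage.objects.create(message_id=msg_id)
        imap.logout()
Saved as e.g. management/commands/fetch_todos.py, cron would run it with python manage.py fetch_todos every few minutes.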
Also have a look at this StackOverflow thread; no need to reinvent the wheel :)
I needed the exact same thing. I use the Lamson project (which is written in Python) to transform email, forward email based on rules to my www.evernote.com and Thinking Rock (www.trgtd.com.au) accounts, update firewall web-filtering rules, update allow/deny lists for my spam filter, read and write databases, and so on.
I like to think of it as email server automation and email application development.
www.lamsonproject.org
Troy
One way that I've solved this in the past was using qmail's .qmail files (docs).
Basically you set up qmail and point your email address (for ease of use, let's assume proc@whatever.com is your email address) to your home directory. In that directory you set up a .qmail-proc file to handle the mail.
This allows you to use a full-fledged SMTP server on your server, including spam filtering, forwarding, aliases, all that fun stuff. You can then pipe the data from an email into an application. In your case, I would suggest making a Management Command in Django to process the email (I'll call it proc_email). Thus your .qmail-proc may look like:
/var/spool/mail/proc
| /www/django/myproject/manage.py proc_email
This stores a copy of the email in /var/spool/mail/proc, then passes the email to the script on the second line. The email itself is passed to proc_email via sys.stdin. Simply read the email from there, and store it through your Django models.
If you need to process email for different addresses later, you can also set up aliases which point to your home directory, and use .qmail-<username> files for each alias. Allowing you to pass other flags (such as the username for each alias) to proc_email if needed.
I should note that this isn't the simplest solution, but it can scale, and is pretty darn bullet proof.
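For illustration, the proc_email management command described above could be as small as this sketch (the Todo model and its fields are hypothetical):
import sys
from email import message_from_binary_file
from django.core.management.base import BaseCommand
from todos.models import Todo  # hypothetical model

class Command(BaseCommand):
    help = "Read one email from stdin (piped in by qmail) and store it as a task."

    def handle(self, *args, **options):
        msg = message_from_binary_file(sys.stdin.buffer)
        if msg.is_multipart():
            body = b""
        else:
            body = msg.get_payload(decode=True) or b""
        Todo.objects.create(
            title=msg.get("Subject", "(no subject)"),
            description=body.decode(errors="replace"),
        )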
I would not focus on Django for this.
I would create a mail server to catch these emails. Use http://docs.python.org/library/smtpd.html.
I would then use just the Django ORM to update the database based on the emails received.
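A minimal sketch of that approach; note that the smtpd module has since been deprecated (aiosmtpd is the usual replacement), but the idea is the same, and the port and ORM call are placeholders:
import asyncore
import smtpd
from email import message_from_bytes

class TodoSMTPServer(smtpd.SMTPServer):
    def process_message(self, peer, mailfrom, rcpttos, data, **kwargs):
        msg = message_from_bytes(data)
        # Update the database via the Django ORM here,
        # e.g. Todo.objects.create(title=msg.get("Subject", ""))
        print("Received:", msg.get("Subject", "(no subject)"))

server = TodoSMTPServer(("0.0.0.0", 2525), None)  # listen on a local port
asyncore.loop()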
