I have a small application under Linux that receives email using smtpd.SMTPServer. Here is the small test code:
import smtpd
import asyncore

class CustomSMTPServer(smtpd.SMTPServer):
    def process_message(self, peer, mailfrom, rcpttos, data):
        print 'Receiving message from:', peer
        print 'Message addressed from:', mailfrom
        print 'Message addressed to  :', rcpttos
        print 'Message length        :', len(data)
        return

server = CustomSMTPServer(('0.0.0.0', 25), None)
asyncore.loop()
I have the following issues:
(1) When using this piece of code, the computer sending the email gets the following message:
502 Error: command "EHLO" not implemented
so the server cannot reply correctly and communicate further with the email-sending computer (which I assume is the client).
Shouldn't something as basic as EHLO be implemented in an Ubuntu installation in the first place? Why is it not implemented?
(2) I figured that EHLO support could be added by installing postfix in Ubuntu. I did that and the same test call got further, but stopped later with a different error:
Client: RCPT TO: XXX@YYY.com
Server: 554 5.7.1 <XXX@YYY>: Relay access denied
(3) At later times, after doing some more tests, I got this error from the test code itself:
error: [Errno 98] Address already in use
It turned out that the address was already in use by another process, as could be seen with
netstat -lnpt
which turned out to be the running postfix. After stopping the postfix service the address was no longer in use, but of course I was back to issue #1:
502 Error: command "EHLO" not implemented
I would like to be able to use an SMTPServer to receive an email message
1. without the need to install postfix
2. with the use of asyncore
If there are any ideas on how to make this possible in a simple way using Python libraries, that would be great!
Cheers
Alex
1) Postfix is an SMTP server; it has nothing to do with the EHLO implementation of Python's smtpd. If you want your own custom SMTP server, you don't need postfix, so feel free to remove it.
2) EHLO is an ESMTP command, not an SMTP command. The standard smtpd Python module implements plain SMTP, so it has no EHLO implementation.
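For what it's worth, smtplib behaves like a typical ESMTP client: it sends EHLO first and only falls back to HELO if the server rejects it, so it can serve as a quick local test client for the plain-SMTP server above. A minimal sketch with hypothetical addresses:
import smtplib

# Hypothetical test client: sendmail() tries EHLO first and falls back to
# HELO on a non-2xx reply, so it can still talk to a HELO-only server.
client = smtplib.SMTP('127.0.0.1', 25)
client.sendmail('sender@example.com', 'rcpt@example.com',
                'Subject: test\r\n\r\nhello from the test client')
client.quit()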
Try this. It does not implement the EHLO command properly, but it makes the server treat EHLO the same as HELO. That might only get you past the first stumbling block, but if the rest of the SMTP commands are compatible it should get you by.
You will probably find the smtpd.py file in /usr/lib/python2.7:
def smtp_HELO(self, arg):
    if not arg:
        self.push('501 Syntax: HELO hostname')
        return
    if self.__greeting:
        self.push('503 Duplicate HELO/EHLO')
    else:
        self.__greeting = arg
        self.push('250 %s' % self.__fqdn)
# copy the above function and rename it smtp_EHLO
def smtp_EHLO(self, arg):
    if not arg:
        self.push('501 Syntax: HELO hostname')
        return
    if self.__greeting:
        self.push('503 Duplicate HELO/EHLO')
    else:
        self.__greeting = arg
        self.push('250 %s' % self.__fqdn)
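If you would rather not edit the system copy of smtpd.py, a minimal alternative sketch is to add the alias from your own script by monkey-patching smtpd.SMTPChannel (same idea: EHLO is simply handled like HELO):
import smtpd

def smtp_EHLO(self, arg):
    # Delegate to the stock HELO handler; no ESMTP extensions are advertised.
    self.smtp_HELO(arg)

# Attach the handler so smtpd dispatches the EHLO verb to it.
smtpd.SMTPChannel.smtp_EHLO = smtp_EHLO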
Also, the Python 3.5 version of the same library looks like it supports EHLO, so you could try Python 3 instead. Bear in mind that Python 3 is not backwards compatible with Python 2 code, so the rest of your script may need changes - good luck.
Related
In my tool, users can provide a mail backend by filling in certain fields on a model and then send their mails via the backend created from those values. This all works, but I would love to have a quick check that the provided backend will actually work before using it. Something like the check_mail_connection below doesn't work, as it returns False even though I entered valid connection parameters.
from django.core.mail import get_connection

class User(models.Model):
    ...

    def get_mail_connection(self, fail_silently=False):
        return get_connection(host=self.email_host,
                              port=self.email_port,
                              username=self.email_username,
                              password=self.email_password ... )

    def check_mail_connection(self) -> bool:
        from socket import error as socket_error
        from smtplib import SMTP, SMTPConnectError
        smtp = SMTP(host=self.email_host, port=self.email_port)
        try:
            smtp.connect()
            return True
        except SMTPConnectError:
            return False
        except socket_error:
            return False
I don't want to send a test mail to confirm, as this can easily get lost or fail in a different part of the system. This feature is for sending out emails from the users' mail servers; I suspect most of my users have a mail server anyway, and I basically offer white labeling and similar features to them.
Your code contains the line smtp.connect(), which attempts to make a connection. If you look at the documentation for smtplib, the signature of this method is:
SMTP.connect(host='localhost', port=0)
That means you are trying to connect to localhost on port 25 (the standard SMTP port). Of course there is no server listening there, so you get a ConnectionRefusedError, which you catch and return False. In fact you don't even need to call connect, because the documentation states:
If the optional host and port parameters are given, the SMTP connect() method is called with those parameters during initialization.
Hence you can simply write:
def check_mail_connection(self) -> bool:
    from smtplib import SMTP
    try:
        smtp = SMTP(host=self.email_host, port=self.email_port)
        return True
    except OSError:
        return False
You can also simply use the open method of the email backend instance, rather than creating the SMTP instance and calling connect yourself:
def check_mail_connection(self) -> bool:
    try:
        email_backend = self.get_mail_connection()
        silent_exception = email_backend.open() is None
        email_backend.close()
        return not silent_exception
    except OSError:
        return False
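As a usage sketch (the recipient address is hypothetical, and it assumes user is an existing User instance whose email_username doubles as the from address), the check can gate sending through the user's backend like this:
from django.core.mail import EmailMessage

# Only send through the user's backend if the connection check passes.
if user.check_mail_connection():
    EmailMessage(
        subject='Test',
        body='Connection check passed.',
        from_email=user.email_username,
        to=['someone@example.com'],
        connection=user.get_mail_connection(),
    ).send()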
I have a few questions for you, and I would like you to answer them before we can go further.
What type of OS are you running the server on?
What mail client and tutorial did you follow? Postfix?
Can a user on the server send local mail to another user on the server?
What ports are open and what type of security features do you have installed?
What did your logs say when the email failed?
Are you self-hosting / acting as the server admin?
(It's fine if this is your first time. Everyone had a first day.)
SSL and an FQDN aren't too important if you're just sending mail out. The system will still work; you just won't be able to receive mail.
(I'm talking in the sense of making sure it will at least send an email. You should still use SSL, as it can be obtained for free.)
If you have checked all of these things, it may be that the mail client you are using won't send mail out unless it has approval. There are a lot of variables.
All of these things matter, or it won't work.
Sorry, I meant to post this as a comment. I'm not used to speaking on here.
What am I doing wrong here? I'm trying to use Stomp to test some things with Artemis 2.13.0, but when I use either the command-line utility or a Python script, I can't subscribe to a queue, even after I use the utility to publish a message to an address.
Also, if I give it a new queue name, it creates the queue but then doesn't pull messages I publish to it. This is confusing. My actual Java app behaves nothing like this -- it's using JMS.
I'm connecting like this with the utility:
stomp -H 192.168.56.105 -P 61616 -U user -W password
> subscribe test3.topic::test.A.queue
Which gives me this error:
Subscribing to 'test3.topic::test.A.queue' with acknowledge set to 'auto', id set to '1'
>
AMQ229019: Queue test.A.queue already exists on address test3.topic
Which makes me think Stomp is trying to create the queue when it subscribes, but I don't see how to manage this in the documentation. http://jasonrbriggs.github.io/stomp.py/api.html
I also have a Python script giving me the same issue.
import os
import time
import stomp

def connect_and_subscribe(conn):
    conn.connect('user', 'password', wait=True)
    conn.subscribe(destination='test3.topic::test.A.queue', id=1, ack='auto')

class MyListener(stomp.ConnectionListener):
    def __init__(self, conn):
        self.conn = conn

    def on_error(self, headers, message):
        print('received an error "%s"' % message)

    def on_message(self, headers, message):
        print('received a message "%s"' % message)
        """for x in range(10):
            print(x)
            time.sleep(1)
        print('processed message')"""

    def on_disconnected(self):
        print('disconnected')
        connect_and_subscribe(self.conn)

conn = stomp.Connection([('192.168.56.105', 61616)], heartbeats=(4000, 4000))
conn.set_listener('', MyListener(conn))
connect_and_subscribe(conn)
time.sleep(1000)
conn.disconnect()
I recommend you try the latest release of ActiveMQ Artemis. Since 2.13.0 was released a year ago, a handful of STOMP-related issues have been fixed, notably ARTEMIS-2817, which looks like your use-case.
It's not clear to me why you're using the fully-qualified queue name (FQQN), so I'm inclined to think this is not the right approach, but regardless, the issue you're hitting should be fixed in later versions. If you want multiple consumers to share the messages on a single subscription, then FQQN would be a good option there.
Also, if you want to use the topic/ or queue/ prefix to control routing semantics from the broker, then you should set the anycastPrefix and multicastPrefix appropriately, as described in the documentation.
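For illustration only, a sketch of a subscription that relies on prefixes rather than FQQN, assuming the broker's STOMP acceptor URL has been configured with anycastPrefix=queue/;multicastPrefix=topic/ (connection details reused from the question):
import stomp

conn = stomp.Connection([('192.168.56.105', 61616)], heartbeats=(4000, 4000))
conn.connect('user', 'password', wait=True)

# With the prefixes configured, 'queue/' asks the broker for anycast (queue)
# semantics on this destination, so no FQQN is needed.
conn.subscribe(destination='queue/test.A.queue', id=1, ack='auto')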
This may be coincidence but ARTEMIS-2817 was originally reported by "BENJAMIN Lee WARRICK" which is surprisingly similar to "BenW" (i.e. your name).
I have a script that I need to run indefinitely. The script is set to e-mail me confirmations of certain steps being completed on a daily basis. I am trying to use smtplib for this.
The initial connection is set up so that I enter my login (written into the script) and a password via getpass at the very start of the script. I do not want to leave my password written into the script, or even referenced by the script, say in a config file. Therefore, I want to enter the password at start-up and leave the SMTP connection in place.
Re-connecting to the SMTP server whenever the script needs it would defeat the point of being able to step away from the script entirely and leave it running indefinitely.
The example code that I am working with at the moment looks like this:
import smtplib
import getpass
smtpObj = smtplib.SMTP('smtp.gmail.com',587)
smtpObj.ehlo()
smtpObj.starttls()
smtpObj.login('myemail@gmail.com', password=getpass.getpass('Enter Password: '))
Then I enter the password and the output is:
(235, b'2.7.0 Accepted')
So this all works fine.
The problem is then that the script needs to pause for anywhere from a few minutes to potentially a few days depending on the time. This is achieved using a while loop with time conditions until a certain time when the send function will be called:
smtpObj.sendmail('myemail@gmail.com', 'recipient@gmail.com', 'This is a test')
However, after a pause of about 20-30 minutes (i.e. if the pause is sufficient), the smtpObj.sendmail call fails due to a timeout error.
The specific error is as follows:
SMTPSenderRefused: (451, b'4.4.2 Timeout - closing connection. l22sm2469172wre.52 - gsmtp', 'myemail@gmail.com')
I have so far tried the following:
Instantiating the connection object with the following timeout parameterisation:
smtpObj = smtplib.SMTP('smtp.gmail.com',587,timeout=None)
smtpObj = smtplib.SMTP('smtp.gmail.com',587,timeout=86400)
Neither of these seems to suppress the 'timeout' of the connection (i.e. the same problem persists).
I have also tried this solution approach suggested in this post:
How can I hold a SMTP connection open with smtplib and Python?
However, this has not worked either!
I want to avoid the solution where I have to re-connect each time I want to send the e-mail, because I only want to enter the password manually once, rather than writing it into the script either directly or indirectly.
There surely is a way to deal with the timeout issue! If anyone can help here, please let me know! Though, if you think the more 'obvious' solution of re-connecting just before the script needs to send an e-mail is the better way to go, then please let me know that too.
Thank you!...
If you don't want to include sensitive credentials in a script, you should use environment variables.
From a terminal shell (outside of python):
$ export secretVariable=mySecretValue
$ echo $secretVariable
mySecretValue
$
So to leverage this in your code...
>>> import os
>>> myPW = os.getenv('secretVariable')
>>> myPW
'mySecretValue'
>>>
By doing this, you don't have to manually type in the password. Beyond that, it's not very practical to try to keep an idle SMTP connection open for potentially days at a time; just implement a try/except structure:
import smtplib
import os

def smtp_connect():
    # Instantiate a connection object...
    password = os.getenv('secretVariable')
    smtpObj = smtplib.SMTP('smtp.gmail.com', 587)
    smtpObj.ehlo()
    smtpObj.starttls()
    smtpObj.login('myemail@gmail.com', password=password)
    return smtpObj

def smtp_operations(smtpObj):
    try:
        # SMTP lib operations...
        smtpObj.sendmail('myemail@gmail.com', 'recipient@gmail.com', 'This is a test')
        # SMTP lib operations...
    except Exception:  # replace this with the appropriate smtplib exception
        # Overwrite the stale connection object with a new one, then re-attempt
        # the SMTP operations with the fresh connection object.
        smtpObj = smtp_connect()
        smtp_operations(smtpObj)

smtpObj = smtp_connect()
smtp_operations(smtpObj)
By replacing except Exception with the actual SMTP exception that gets raised when you have a stale connection, you make sure you're not catching exceptions that don't pertain to the connection being stale.
So, using try/except, the script will attempt to perform the SMTP operations. If the connection is stale, it will instantiate a fresh connection object and then re-execute itself with that fresh connection object.
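For reference, the timeout in the question surfaces as smtplib.SMTPSenderRefused (it carries the 451 reply), and a connection that has been dropped outright typically raises smtplib.SMTPServerDisconnected, so a narrower version of the handler might look like this (a sketch reusing the smtp_connect() helper above):
from smtplib import SMTPSenderRefused, SMTPServerDisconnected

def smtp_operations(smtpObj):
    try:
        smtpObj.sendmail('myemail@gmail.com', 'recipient@gmail.com', 'This is a test')
    except (SMTPSenderRefused, SMTPServerDisconnected):
        # The connection has gone stale; rebuild it and retry.
        smtp_operations(smtp_connect())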
If I run this code:
import sshtunnel
try:
    with sshtunnel.open_tunnel("hello", ssh_username="user",
                               ssh_password="PASSWORD",
                               remote_bind_address=("1.2.3.4", 23)) as server:
        pass
except:
    pass
I get this:
2016-04-06 10:47:53,006 | ERROR | Could not resolve IP address for hello, aborting!
I am ignoring the exception, but some random line is showing up for some reason. Why? Is this just some random print statement in some library somewhere? Is this common? Seems like libraries shouldn't really be printing anything to the screen directly. How do I suppress this line?
PS: the code is only meant to replicate the error - obviously using a catch-all for exceptions and doing nothing with them is bad.
That looks like a logging statement, specifically logging.error().
It's going to the screen because you haven't set up a log handler which would send it somewhere else. See https://docs.python.org/2/library/logging.html for more information.
It's going to the standard error output (which in a terminal window looks the same as the regular output). If your code were part of a web service, it would go to the web server's error log.
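As a minimal sketch, assuming the library logs through the standard logging module and does not attach its own console handler, configuring the root logger sends such records to a file instead of the terminal:
import logging

# Attach a file handler to the root logger; ERROR records that propagate to
# the root logger then go to sshtunnel.log instead of stderr.
logging.basicConfig(filename='sshtunnel.log', level=logging.ERROR)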
The first non-keyword argument you pass to open_tunnel is expected to be the destination server, either a string or an (ip, port) tuple (see the function's docstring).
Eventually, this leads to ssh_host being set to "hello" in the example you gave, which logs an error message in this except block.
"some random line is showing up for some reason." - its a straight-forward error message... the remote server couldn't find a host called "hello". As for why you see it, sshtunnel creates a console logger for error messages if you don't pass a logger yourself. That is a strange thing to do, IMHO. open_tunnel accepts two keywoard arguments: logger is a standard python logger and debug_level is the level to log. See python logging for details on setting up a logger.
The part where you have "hello" should ideally be the IP address or hostname of the SSH server that you are connecting to. The "random line" you mentioned is the ERROR statement that explains this.
Although your program does not print anything itself, the ERROR line comes from the sshtunnel library.
The relevant block in the library raises an exception with this particular error message; the raise statement produces errors that can be caught by except statements.
TL;DR
add threaded=False
I had exactly the same ERROR, and it appears after the closing of the tunnel (release):
2018-01-16 10:52:58,685| INFO | Shutting down tunnel ('0.0.0.0', 33553)
2018-01-16 10:52:58,764| INFO | Tunnel: 0.0.0.0:33553 <> sql_database_resolved_on_remote:1433 released
2018-01-16 10:52:58,767| ERROR | Could not establish connection from ('127.0.0.1', 33553) to remote side of the tunnel
2018-01-16 10:52:58,889| DEBUG | Transport is closed
That's somewhat normal, but why is it there?
"Solution-ish"
Add threaded=False.
Tested with both open_tunnel() and SSHTunnelForwarder in a with ... as tunnel: context:
import time
import socket
from sshtunnel import SSHTunnelForwarder, open_tunnel

def test_tunnel(tunnel):
    # Wait for tunnel to be established ?
    #tunnel.check_tunnels()
    #time.sleep(0.5)
    #print tunnel.tunnel_is_up
    s = socket.socket()
    s.settimeout(2)
    for i in range(0, 10):
        """
        I create a new socket each time otherwise I get
        the error 106 (errno.EISCONN):
        'Transport endpoint is already connected'
        """
        s = socket.socket()
        s.settimeout(2)
        state = s.connect_ex(('localhost', tunnel.local_bind_port))
        s.close()
        okoko = "OK" if state == 0 else "NO"
        print "%s (%s)" % (okoko, state)

with open_tunnel(
    'server_i_can_ssh_with_mykey',
    ssh_username='myuser',
    ssh_pkey='/home/myuser/.ssh/id_rsa',
    ssh_private_key_password=None,  # no pwd on my key
    remote_bind_address=('sql_database_resolved_on_remote', 1433),  # e.g.
    debug_level=10,  # remove this if you test with SSHTunnelForwarder
    threaded=False,
) as tunnel:
    test_tunnel(tunnel)
hth
I have the simple minimal 'working' example below that opens a connection to Google every two seconds. When I run this script while I have a working internet connection, I get the Success message; when I then disconnect, I get the Fail message; and when I reconnect again, I get Success again. So far, so good.
However, when I start the script while the internet is disconnected, I get the Fail messages, and when I connect later, I never get the Success message. I keep getting the error:
urlopen error [Errno -2] Name or service not known
What is going on?
import urllib2, time

while True:
    try:
        print('Trying')
        response = urllib2.urlopen('http://www.google.com')
        print('Success')
        time.sleep(2)
    except Exception, e:
        print('Fail ' + str(e))
        time.sleep(2)
This happens because the DNS name "www.google.com" cannot be resolved. If there is no internet connection the DNS server is probably not reachable to resolve this entry.
It seems I misread your question the first time. The behaviour you describe is, on Linux, a peculiarity of glibc: it only reads "/etc/resolv.conf" once, when loading. glibc can be forced to re-read "/etc/resolv.conf" via the res_init() function.
One solution would be to wrap the res_init() function and call it before calling getaddrinfo() (which is indirectly used by urllib2.urlopen()).
You might try the following (still assuming you're using Linux):
import ctypes

# Load glibc and grab its (internal) __res_init symbol.
libc = ctypes.cdll.LoadLibrary('libc.so.6')
res_init = libc.__res_init

# ...

# Force glibc to re-read /etc/resolv.conf before the lookup.
res_init()
response = urllib2.urlopen('http://www.google.com')
This might of course be optimized by waiting until "/etc/resolv.conf" is modified before calling res_init().
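A rough sketch of that optimization (the maybe_res_init helper is hypothetical, and it reuses the res_init handle from the snippet above): track the file's modification time and only call res_init() when it changes:
import os

_last_mtime = None

def maybe_res_init(path='/etc/resolv.conf'):
    # Call res_init() only when resolv.conf has actually been rewritten.
    global _last_mtime
    try:
        mtime = os.stat(path).st_mtime
    except OSError:
        return
    if mtime != _last_mtime:
        _last_mtime = mtime
        res_init()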
Another solution would be to install e.g. nscd (name service cache daemon).
For me, it was a proxy problem.
Running the following before importing urllib.request helped:
import os
os.environ['http_proxy'] = ''

import urllib.request
response = urllib.request.urlopen('http://www.google.com')