Briefing
I am currently building a Python SMTP mail sender program.
I added a feature so that the user cannot log in if there is no active internet connection. I tried many solutions/variations to make the real-time connection checking as swift as possible, but there were many problems, such as:
The thread where the connection handler was running suddenly lagged when I pulled out the ethernet cable (to test how it would handle a sudden disconnect)
The whole program crashed
It took several seconds for the program to detect the change
My current solution
I set up a data-handling class which contains all the necessary info (the state the modules need to share effectively):
import smtplib
from socket import gaierror, timeout

class DataHandler:
    is_logged_in = None
    is_connected = None
    server_conn = None
    user_address = ''
    user_passwd = ''

    @staticmethod
    def try_connect():
        try:
            # The place where the connection is checked
            DataHandler.server_conn = smtplib.SMTP('smtp.gmail.com', 587, timeout=1)
            DataHandler.is_connected = True
        except (smtplib.SMTPException, gaierror, timeout):
            # Connection status changed upon a connection error
            DataHandler.is_connected = False
I put the connection handler on a second thread, because the server connection process slowed down the GUI when it all ran on one thread.
from root_gui import Root
import threading
from time import sleep
from data_handler import DataHandler

def handle_conn():
    DataHandler.try_connect()
    smtp_client.refresh()  # Refreshes the GUI according to the current status

def conn_manager():  # Working pretty well
    while 'smtp_client' in globals():
        sleep(0.6)
        try:
            handle_conn()  # Calls the connection check
        except NameError:  # If the user quits the tkinter GUI
            break

smtp_client = Root()
handle_conn()

MyConnManager = threading.Thread(target=conn_manager)
MyConnManager.start()

smtp_client.mainloop()
del smtp_client  # The connection manager will detect this and stop running
My question is:
Is this good practice or a terrible waste of resources? Is there a better way to do this? No matter what I tried, this was the only solution that worked.
From what I know, the try_connect() function creates a completely new SMTP object each time it is run (which is once every 0.6 seconds!).
Resources/observations
The project on git: https://github.com/cernyd/smtp_client
Observation: the timeout parameter when creating the SMTP object improved response times drastically. Why is that so?
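For context on the timeout observation: smtplib hands the timeout straight down to the underlying socket connect, so a failed attempt gives up after roughly that long instead of waiting for the operating system's much longer default. If the check only needs to know whether the network is reachable, a bare TCP probe with the same timeout would also be lighter than building a full SMTP session every 0.6 seconds. A minimal sketch (host, port, and timeout are placeholder assumptions, not part of the project):

import socket

def network_available(host='smtp.gmail.com', port=587, conn_timeout=1):
    # True if a plain TCP connection to host:port succeeds within the timeout;
    # only the TCP handshake is performed, no SMTP banner is read.
    try:
        probe = socket.create_connection((host, port), timeout=conn_timeout)
        probe.close()
        return True
    except socket.error:  # also covers gaierror and timeout
        return False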
Related
I'm working on an asynchronous communication script that acts as a middleman between a React Native app and another agent. To do this I used Python with D-Bus to implement the communication between the two.
To implement this we created two processes, one for the BLE and one for the communication with the agent. In cases where the agent replies immediately (with a non-blocking call), the communication always works as intended. In the case where we attach to a signal to continuously monitor an update status, this error occurs most of the time, at random points during the process:
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
I have tested both the BLE process and the agent process separately and they work as intended.
I currently suspect that it could be related to messages "crashing" on the system bus or to some race condition, but we are unsure how to validate that.
Any advice on what could be causing this issue or how I could avoid it?
For completeness I've attached a simplified version of the class that handles the communication with the agent.
import multiprocessing
from enum import Enum

import dbus
import dbus.mainloop.glib
from dbus.proxies import ProxyObject
from gi.repository import GLib
from omegaconf import DictConfig

dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)


class ClientUpdateStatus(Enum):
    SUCCESS = 0
    PENDING = 1
    IN_PROGRESS = 2
    FAILED = 3


class DBUSManager:
    GLIB_LOOP = GLib.MainLoop()
    COMMUNICATION_QUEUE = multiprocessing.Queue()

    def __init__(self, config: DictConfig) -> None:
        dbus_system_bus = dbus.SystemBus()
        dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
        self._config = config
        self._dbus_object = dbus_system_bus.get_object(self._config['dbus_interface'],
                                                       self._config['dbus_object_path'], introspect=False)

    def get_version(self) -> str:
        version = self._dbus_object.GetVersion("clientSimulator", dbus_interface=self._config['dbus_interface'])
        return version

    def check_for_update(self) -> str:
        update_version = self._dbus_object.CheckForUpdate("clientSimulator",
                                                          dbus_interface=self._config['dbus_interface'])
        return update_version

    def run_update(self) -> ClientUpdateStatus:
        raw_status = self._dbus_object.ExecuteUpdate(dbus_interface=self._config['dbus_interface'])
        update_status = ClientUpdateStatus(raw_status)

        # Launch listening process
        signal_update_proc = multiprocessing.Process(target=DBUSManager.start_listener_process,
                                                     args=(self._dbus_object, self._config['dbus_interface'],))
        signal_update_proc.start()

        while True:
            raw_status = DBUSManager.COMMUNICATION_QUEUE.get()
            update_status = ClientUpdateStatus(raw_status)
            if ClientUpdateStatus.SUCCESS == update_status:
                break

        signal_update_proc.join()
        return update_status

    @staticmethod
    def start_listener_process(dbus_object: ProxyObject, dbus_interface: str) -> None:
        dbus_object.connect_to_signal("UpdateStatusChanged", DBUSManager.status_change_handler,
                                      dbus_interface=dbus_interface)
        # Launch loop to acquire signals
        DBUSManager.GLIB_LOOP.run()  # This listening loop exits on GLIB_LOOP.quit()

    @staticmethod
    def status_change_handler(new_status: int) -> None:
        DBUSManager.COMMUNICATION_QUEUE.put(new_status)
        if ClientUpdateStatus.SUCCESS == ClientUpdateStatus(new_status):
            DBUSManager.GLIB_LOOP.quit()
At this stage I would recommend running dbus-monitor to see whether the agent and the BLE process are reacting to requests properly.
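If a quick script is more convenient than dbus-monitor, a throwaway listener can subscribe to the same signal on the system bus to confirm it is actually being emitted. This is only a debugging sketch, assuming the signal name used in the code above:

import dbus
import dbus.mainloop.glib
from gi.repository import GLib

dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
bus = dbus.SystemBus()

def on_update_status_changed(new_status):
    print("UpdateStatusChanged: %s" % int(new_status))

# Match the signal by name regardless of which object emits it.
bus.add_signal_receiver(on_update_status_changed, signal_name="UpdateStatusChanged")

GLib.MainLoop().run()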
Maybe to help others in the future: I haven't found the solution, but at least a way to work around it. We have encountered that problem at various places in our project, and what generally helped (don't ask me why) was to always re-instantiate all needed D-Bus objects. So instead of having a single class that keeps a system bus variable self._system_bus = dbus.SystemBus() or an interface variable self._manager_interface = dbus.Interface(proxy_object, "org.freedesktop.DBus.ObjectManager"), we would always re-instantiate them, roughly as in the sketch below.
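A minimal sketch of that workaround (the bus name, object path, and interface arguments are illustrative placeholders, not taken from the real project):

import dbus

def call_agent_method(bus_name, object_path, interface, method_name, *args):
    # Re-instantiate the bus, proxy object, and interface on every call
    # instead of caching them on the class.
    system_bus = dbus.SystemBus()
    proxy_object = system_bus.get_object(bus_name, object_path, introspect=False)
    iface = dbus.Interface(proxy_object, interface)
    return getattr(iface, method_name)(*args)

Each call pays the cost of re-creating the proxy, but nothing holds on to a proxy whose underlying connection may have gone stale.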
If somebody knows what the problem is I'm happy to hear it.
I am trying to connect with the IB API to download some historical data. I have noticed that my client connects to the API, but then disconnects automatically within a very short period (~a few seconds).
Here's the log on the server:
socket connection for client{10} has closed.
Connection terminated.
Here's my main code for starting the app:
class TestApp(TestWrapper, TestClient):
    def __init__(self):
        TestWrapper.__init__(self)
        TestClient.__init__(self, wrapper=self)
        self.connect(config.ib_hostname, config.ib_port, config.ib_session_id)
        self.session_id = int(config.ib_session_id)
        self.thread = Thread(target=self.run)
        self.thread.start()
        setattr(self, "_thread", self.thread)
        self.init_error()

    def reset_connection(self):
        pass

    def check_contract(self, name, exchange_name, security_type, currency):
        self.reset_connection()
        ibcontract = IBcontract()
        ibcontract.secType = security_type
        ibcontract.symbol = name
        ibcontract.exchange = exchange_name
        ibcontract.currency = currency
        return self.resolve_ib_contract(ibcontract)

    def resolve_contract(self, security):
        self.reset_connection()
        ibcontract = IBcontract()
        ibcontract.secType = security.security_type()
        ibcontract.symbol = security.name()
        ibcontract.exchange = security.exchange()
        ibcontract.currency = security.currency()
        return self.resolve_ib_contract(ibcontract)

    def get_historical_data(self, security, duration, bar_size, what_to_show):
        self.reset_connection()
        resolved_ibcontract = self.resolve_contract(security)
        data = self.get_IB_historical_data(resolved_ibcontract.contract, duration, bar_size, what_to_show)
        return data

def create_app():
    test_app = TestApp()
    return test_app
Any suggestions on what could be the problem? I can show more error messages from the debug if needed.
If you can connect without issue only by changing the client ID, that typically indicates that the previous connection was not properly closed and TWS thinks it is still open. To disconnect an API client you should call the EClient.disconnect function explicitly, which in your example would be:
test_app.disconnect()
That said, it's not necessary to disconnect/reconnect after every task; you can just leave the connection open for extended periods.
You may sometimes encounter problems if an API function, such as reqHistoricalData, is called immediately after connection. It's best to have a small pause after initiating a connection and to wait for a callback such as nextValidId to ensure the connection is complete before proceeding.
http://interactivebrokers.github.io/tws-api/connection.html#connect
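One way to implement that wait is to have the wrapper set an event when nextValidId arrives and block on it before sending any requests. This is only a sketch built on the standard nextValidId callback; the mixin name and timeout are assumptions, not code from the question:

import threading

class ConnectionReadyMixin:
    # Mix into the wrapper/client class; connection_ready is set once TWS
    # reports the next valid order id, i.e. the handshake has completed.
    def __init__(self):
        self.connection_ready = threading.Event()

    def nextValidId(self, orderId):
        self.next_order_id = orderId
        self.connection_ready.set()

# After calling connect(...) and starting the reader thread:
#     if not self.connection_ready.wait(timeout=5):
#         raise RuntimeError("Connection to TWS did not complete in time")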
I'm not sure what the init_error() function is intended for in your example, since it would always be called when a TestApp object is created (whether or not there is an error).
Installing the latest version of TWS API (v 9.76) solved the problem.
https://interactivebrokers.github.io/#
Edit:
The main issue is that the 3rd-party RabbitMQ machine seems to kill idle connections every now and then. That's when I start getting "Broken Pipe" exceptions. The only way to get communications back to normal is for me to kill the processes and restart them. I assume there's a better way?
--
I'm a little lost here. I am connecting to a 3rd-party RabbitMQ server to push messages to. Every now and then all the sockets on their machine get dropped and I end up getting a "Broken Pipe" exception.
I've been told to implement a heartbeat check in my code, but I'm not sure how exactly. I've found some info here: http://kombu.readthedocs.org/en/latest/changelog.html#version-2-3-0 but no real example code.
Do I only need to add "?heartbeat=x" to the connection string? Does Kombu do the rest? I see I need to call "Connection.heartbeat_check()" at "x/2". Should I create a periodic task to call this? How does the connection get re-established?
I'm using:
celery==3.0.12
kombu==2.5.4
My code looks like this right now. A simple Celery task gets called to send the message through to the 3rd-party RabbitMQ server (logging and comments removed to keep it short; it's basic enough):
class SendMessageTask(Task):
    name = "campaign.backends.send"
    routing_key = "campaign.backends.send"
    ignore_result = True
    default_retry_delay = 60  # 1 minute.
    max_retries = 5

    def run(self, send_to, message, **kwargs):
        payload = "Testing message"
        try:
            conn = BrokerConnection(
                hostname=HOSTNAME,
                port=PORT,
                userid=USER_ID,
                password=PASSWORD,
                virtual_host=VHOST
            )
            with producers[conn].acquire(block=True) as producer:
                publish = conn.ensure(producer, producer.publish, errback=sending_errback, max_retries=3)
                publish(
                    body=payload,
                    routing_key=OUT_ROUTING_KEY,
                    delivery_mode=2,
                    exchange=EXCHANGE,
                    serializer=None,
                    content_type='text/xml',
                    content_encoding='utf-8'
                )
        except Exception, ex:
            print ex
Thanks for any and all help.
While you certainly can add heartbeat support to a producer, it makes more sense for consumer processes.
Enabling heartbeats means that you have to send heartbeats regularly; e.g. if the heartbeat is set to 1 second, you have to send a heartbeat at least once per second (more often is fine), or the remote will close the connection.
This means that you have to use a separate thread or async I/O to reliably send heartbeats in time, and since a connection cannot be shared between threads, this leaves us with async I/O.
The good news is that you probably won't get much benefit from adding heartbeats to a produce-only connection.
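If you do want to experiment with heartbeats on the producer side anyway, the pattern described in the kombu changelog is roughly: request a heartbeat when creating the connection, then call heartbeat_check() about twice per heartbeat interval. A rough sketch, assuming kombu 2.3+ with the py-amqp transport (the transport that supports heartbeats) and placeholder connection details:

import time
from kombu import Connection

# Ask the broker for a 10-second heartbeat when connecting
# (URL and credentials are placeholders).
conn = Connection('amqp://user:password@broker-host:5672/vhost', heartbeat=10)
conn.connect()

# Something (a loop, timer, or periodic task) must call heartbeat_check()
# roughly every heartbeat/2 seconds, or the broker will drop the connection.
while True:
    conn.heartbeat_check()  # raises if the broker has stopped answering
    time.sleep(5)

For a fire-and-forget producer, retrying the publish via conn.ensure(...) as in the task above is often the simpler option.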
I have a Tornado web server, something like:
app = tornado.web.Application(handlersList, log_function=printIt)
app.listen(port)
serverInstance = tornado.ioloop.IOLoop.instance()
serverInstance.start()
The handlers are made with tornado.web.RequestHandler.
When I run the server on FreeBSD, sometimes the page/resource takes a long time to load.
Trying to debug, I see that while waiting for the page to load Tornado hasn't created a request object yet, and looking at netstat output I see a lot of connections with status ESTABLISHED.
So my thought is that there are too many unclosed connections, and the operating system refuses new connections coming from the same session.
Can this be the case?
I don't do anything in the get/post functions after writing; should I somehow shutdown/close the connection before returning?
EDIT 1: get/post are synchronous (no @asynchronous)
EDIT 2: temporarily fixed by forcing no_keep_alive
class BasicFeedHandler(tornado.web.RequestHandler):
    def finish(self, chunk=None):
        self.request.connection.no_keep_alive = True
        tornado.web.RequestHandler.finish(self, chunk)
I'm not sure if keep-alive connections should stay open so long after the client closed the connection; anyway, this workaround works.
I found how to do this by looking at HTTPConnection._finish_request: when there is no keep-alive, the line self.stream.read_until(b("\r\n\r\n"), self._header_callback) runs.
What is \r\n\r\n in this context?
Try this:
class Application(tornado.web.Application):
    def __init__(self):
        ...

http_server = tornado.httpserver.HTTPServer(Application(), no_keep_alive=True)
http_server.listen(port)
tornado.ioloop.IOLoop.instance().start()
I am working on a web service with Twisted that is responsible for calling up several packages I had previously used on the command line. The routines these packages handle were being prototyped on their own but are now ready to be integrated into our web service.
In short, I have several different modules that all create a MySQL connection property internally in their original command-line forms. Take this for example:
class searcher:
    def __init__(self, lat, lon, radius):
        self.conn = getConnection()[1]
        self.con = self.conn.cursor()
        self.mgo = getConnection(True)
        self.lat = lat
        self.lon = lon
        self.radius = radius
        self.profsinrange()
        self.cache = memcache.Client(["173.220.194.84:11211"])
The getConnection function is just a helper that returns a Mongo or MySQL cursor respectively. Again, this is all prototypical :)
The problem I am experiencing is that, when implemented as a continuously running server using Twisted's WSGI resource, the SQL connection created in __init__ times out, and subsequent requests don't seem to regenerate it. Example code for the small server app:
from twisted.web import server
from twisted.web.wsgi import WSGIResource
from twisted.python.threadpool import ThreadPool
from twisted.internet import reactor
from twisted.application import service, strports
import cgi

import gnengine
import nn

wsgiThreadPool = ThreadPool()
wsgiThreadPool.start()

# ensuring that it will be stopped when the reactor shuts down
reactor.addSystemEventTrigger('after', 'shutdown', wsgiThreadPool.stop)

def application(environ, start_response):
    start_response('200 OK', [('Content-type', 'text/plain')])
    params = cgi.parse_qs(environ['QUERY_STRING'])
    try:
        lat = float(params['lat'][0])
        lon = float(params['lon'][0])
        radius = int(params['radius'][0])
        query_terms = params['query']
        s = gnengine.searcher(lat, lon, radius)
        query_terms = ' '.join(query_terms)
        json = s.query(query_terms)
        return [json]
    except Exception, e:
        return [str(e), str(params)]
    return ['error']

wsgiAppAsResource = WSGIResource(reactor, wsgiThreadPool, application)

# Hooks for twistd
application = service.Application('Twisted.web.wsgi Hello World Example')
server = strports.service('tcp:8080', server.Site(wsgiAppAsResource))
server.setServiceParent(application)
The first few requests work fine, but after MySQL's wait_timeout expires, the dreaded error 2006 "MySQL server has gone away" surfaces. It had been my understanding that every request to the Twisted WSGI resource would run the application function, thereby regenerating the searcher object and re-leasing the connection. If this isn't the case, how can I make the requests be processed that way? Is this kind of Twisted deployment not transactional in this sense? Thanks!
EDIT: Per request, here is the prototype helper function calling up the connection:
def getConnection(mong=False):
    if mong == False:
        connection = mysql.connect(host=db_host,
                                   user=db_user,
                                   passwd=db_pass,
                                   db=db,
                                   cursorclass=mysql.cursors.DictCursor)
        cur = connection.cursor()
        return (cur, connection)
    else:
        return pymongo.Connection('173.220.194.84', 27017).gonation_test
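For reference, a common stopgap against wait_timeout (short of switching to a pooled solution such as adbapi, discussed in the answer below) is to ping a cached connection and rebuild it when the ping fails. This is only a sketch reusing the names from the prototype above (mysql, db_host, and friends); it is not thread-safe, so a threaded WSGI deployment would need a per-thread connection or a real pool:

_mysql_conn = None

def get_mysql_connection():
    # Return a cached MySQL connection, rebuilding it if the server
    # has dropped it (e.g. after wait_timeout expired).
    global _mysql_conn
    if _mysql_conn is not None:
        try:
            _mysql_conn.ping()
            return _mysql_conn
        except mysql.OperationalError:
            _mysql_conn = None
    _mysql_conn = mysql.connect(host=db_host, user=db_user, passwd=db_pass,
                                db=db, cursorclass=mysql.cursors.DictCursor)
    return _mysql_conn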
I was developing a piece of software with Twisted where I had to maintain a constant MySQL database connection. I ran into this problem, and despite digging through the Twisted documentation extensively and posting a few questions, I was unable to find a proper solution. There is a boolean parameter you can pass when instantiating the adbapi.ConnectionPool class; however, it never seemed to work and I kept getting the error regardless. What I am guessing the reconnect boolean represents, though, is the destruction of the connection object when an SQL disconnect does occur.
adbapi.ConnectionPool("MySQLdb", cp_reconnect=True, host="", user="", passwd="", db="")
I have not tested this, but I will re-post some results when I do, or if anyone else has, please share.
When I was developing the script I was using Twisted 8.2.0 (I haven't touched Twisted in a while), and back then the framework had no such explicit keep-alive method, so I developed a ping/keepalive extension employing the event-driven paradigm Twisted builds upon, in conjunction with the MySQLdb module's ping() method (see the code comment).
As I was typing this response, however, I looked around the current Twisted documentation and was still unable to find an explicit keep-alive method or parameter. My guess is that this is because Twisted itself does not have database connectivity libraries/classes; it uses the modules available to Python and provides an indirect layer for interfacing with them, with some exposure for direct calls to the database library being used. This is accomplished by using the adbapi.runWithConnection method.
Here is the module I wrote under Twisted 8.2.0 and Python 2.6; you can set the intervals between pings. What the script does is: every 20 minutes it pings the database, and if the ping fails, it attempts to reconnect every 60 seconds. I must warn that the script does NOT handle a sudden/dropped connection; you can handle that through addErrback whenever you run a query through Twisted, at least that's how I did it. I have noticed that when the database connection drops, you only find out when you execute a query and the event raises an errback, and at that point you deal with it. Basically, if I don't run a query for 10 minutes and my database disconnects me, my application will not respond in real time; it will only realize the connection has been dropped when it runs the next query, so the database could have disconnected us 1 minute after the first query, or 5, or 9, etc.
I guess this sort of goes back to the original idea I stated: Twisted utilizes Python's own libraries or 3rd-party libraries for database connectivity, and because of that some things are handled a bit differently.
from twisted.enterprise import adbapi
from twisted.internet import reactor, defer, task

class sqlClass:
    def __init__(self, db_pointer):
        self.dbpool = db_pointer
        self.dbping = task.LoopingCall(self.dbping)
        # 20 minutes = 1200 seconds; if a MySQL socket is idle for 20 minutes or
        # longer, MySQL itself disconnects the session for security reasons. You
        # can change that in the database server's configuration, but it may not
        # be recommended.
        self.dbping.start(1200)
        self.reconnect = False
        print "database ping initiated"

    def dbping(self):
        def ping(conn):
            # Twisted gives us access to the methods of the underlying MySQLdb
            # connection here; the native ping() is used instead of sending a
            # null query to the database.
            conn.ping()
        pingdb = self.dbpool.runWithConnection(ping)
        pingdb.addCallback(self.dbactive)
        pingdb.addErrback(self.dbout)
        print "pinging database"

    def dbactive(self, data):
        if data == None and self.reconnect == True:
            self.dbping.stop()
            self.reconnect = False
            self.dbping.start(1200)  # 20 minutes = 1200 seconds
            print "Reconnected to database!"
        elif data == None:
            print "database is active"

    def dbout(self, deferr):
        #print deferr
        if self.reconnect == False:
            self.dbreconnect()
        elif self.reconnect == True:
            print "Unable to reconnect to database"
        print "unable to ping MySQL database!"

    def dbreconnect(self, *data):
        self.dbping.stop()
        self.reconnect = True
        #self.dbping = task.LoopingCall(self.dbping)
        self.dbping.start(60)  # retry every 60 seconds

if __name__ == "__main__":
    db = sqlClass(adbapi.ConnectionPool("MySQLdb", cp_reconnect=True, host="", user="", passwd="", db=""))
    reactor.callLater(2, db.dbping)
    reactor.run()
let me know how it works out for you :)