Get mailbox (yahoo) changes faster - python

Implementing an email client (to a Yahoo server) in Python using tkinter. Very basic functionality: browse folders, list messages in the selected folder, create, forward, reply to, and delete messages. At present it is too slow (it takes too long to see changes made remotely). My Yahoo mailbox has ~170 messages.
To approach the problem I created the scripts fetch_idle.py and fetch_poll.py (below).
It looks like neither Yahoo nor Gmail supports the IDLE command. The fetch_idle.py script:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""Fetch unseen messages in a mailbox. Uses imap_tools. It turned out
that neither Yahoo nor Gmail accepted the IDLE command here."""
import os
import ssl

from imap_tools import MailBox, A

import conf

if __name__ == '__main__':
    args = conf.parser.parse_args()
    host, port, env_var = conf.config[args.host]
    if 0 < args.verbose:
        print(host, port, env_var)
    user, pass_ = os.getenv('USER_NAME_EMAIL'), os.getenv(env_var)
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    # ctx.options &= ~ssl.OP_NO_SSLv3
    with MailBox(host=host, port=port, ssl_context=ctx) as mbox:
        mbox.login(user, pass_, initial_folder='INBOX')
        # Looks like neither Yahoo nor Gmail supports IDLE
        data = mbox.idle.wait(timeout=60)
        if data:
            for msg in mbox.fetch(A(seen=False)):
                print(msg.date, msg.subject)
        else:
            print('no updates in 60 sec')
gives the following errors, respectively (for Yahoo and Gmail):
imap_tools.errors.MailboxTaggedResponseError: Response status "None" expected, but "b'IIDE1 BAD [CLIENTBUG] ID Command arguments invalid'" received. Data: IDLE start
imap_tools.errors.MailboxTaggedResponseError: Response status "None" expected, but "b'GONM1 BAD Unknown command s19mb13629058ljg'" received. Data: IDLE start
So I resorted to reading all UIDs in the mailbox and diffing the new set against the old one to learn what has changed. For that I use the fetch_poll.py script:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""Get all uids in the mailbox. Wait 60 secs. Fetch uids again. Print
the changes. Repeat."""
import os
import ssl
from threading import Thread, Condition, Event
import time
import imaplib

import imap_tools
# from progress.bar import Bar
# from imap_tools import MailBox

import conf

all_uids = deleted_uids = new_uids = set()
POLL_INTERVAL = 10
request_to_terminate = Event()


def fetch_uids(mbox, cv):
    mbox.folder.set(mbox.folder.get())
    try:
        uids = [int(i.uid) for i in mbox.fetch(headers_only=1, bulk=1)]
    except (imaplib.IMAP4.error, imap_tools.errors.MailboxFetchError):
        uids = []
    return uids


def update_uids(mbox, cv):
    global all_uids, deleted_uids, new_uids
    while True:
        if request_to_terminate.is_set():
            break
        with cv:
            start = time.perf_counter()
            uids = set(fetch_uids(mbox, cv))
            print(f'Fetching {len(uids)} uids '
                  f'took {time.perf_counter() - start} secs')
            new_uids = uids - all_uids
            deleted_uids = all_uids - uids
            all_uids = uids
            if deleted_uids or new_uids:
                cv.notify()
        time.sleep(POLL_INTERVAL)


if __name__ == '__main__':
    cv = Condition()
    args = conf.parser.parse_args()
    host, port, env_var = conf.config[args.host]
    user, pass_ = os.getenv('USER_NAME_EMAIL'), os.getenv(env_var)
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ctx.options &= ~ssl.OP_NO_SSLv3
    with imap_tools.MailBox(host=host, port=port, ssl_context=ctx) as mbox:
        mbox.login(user, pass_, initial_folder='INBOX')
        all_uids = set()
        uids_delta = set()
        update_thread = Thread(target=update_uids, args=(mbox, cv),
                               daemon=True)
        update_thread.start()
        while True:
            try:
                with cv:
                    while not (deleted_uids or new_uids):
                        cv.wait()
                    if deleted_uids:
                        print(f'deleted_uids = {deleted_uids}')
                    if new_uids:
                        print(f'new_uids = {new_uids}')
                    deleted_uids = set()
                    new_uids = set()
            except KeyboardInterrupt:
                ans = input('Add marker/terminate [M/t] ?')
                if ans in ['', 'm', 'M']:  # write marker
                    continue
                else:
                    request_to_terminate.set()
                    update_thread.join()
                    break
The script takes from 10 to 30 seconds to fetch all the UIDs (in the fetch_uids function).
I experimented with Debian Evolution (3.38) and macOS High Sierra (10.13.6) Mail (11.6). Evolution sees the changes instantly (pressing File > Send/Receive > Send/Receive (F12) takes me longer than it takes Evolution to pick up the changes). In macOS Mail I need Mailbox > Get New Mail to fetch the new mail; it is equally fast. I deleted some old messages to see how quickly the clients would notice the deletion; again, explicitly issuing the commands above gives a quick result.
I created/deleted messages using https://mail.yahoo.com
How can I speed up my script and see changes made elsewhere in less than the current ~15 seconds (on average)?
Python 3.9.2, Debian GNU/Linux 11 (bullseye)
UPDATE
Credit to @Max, who suggested the solution in the comments. The new, much faster version of fetch_uids() takes only ~2 seconds instead of ~15:
def fetch_uids(mbox, cv):
    mbox.folder.set(mbox.folder.get())
    try:
        uids = map(int, mbox.uids('ALL'))
    except (imaplib.IMAP4.error, imap_tools.errors.MailboxFetchError):
        uids = []
    return uids

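A complementary trick that might shave the poll down further (an assumption on my side, not benchmarked against Yahoo): the IMAP STATUS command returns counters such as MESSAGES and UIDNEXT in a single round trip, so each poll can skip the UID fetch entirely when nothing has moved. A sketch with imap_tools:

def mailbox_changed(mbox, last_snapshot):
    # STATUS is one cheap round trip; only fetch UIDs when the counters move
    status = mbox.folder.status()
    snapshot = (status.get('MESSAGES'), status.get('UIDNEXT'))
    return snapshot != last_snapshot, snapshot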

Related

Problem opening same port multiple times in python Digi-XBee API

I'm trying to understand why I'm not able to open the same serial port multiple times with the Digi-XBee API for Python (import digi.xbee), while with the XBee API for Python (import xbee) I can.
When I run the code below, the exception digi.xbee.exception.InvalidOperatingModeException: Could not determine operating mode is raised.
from digi.xbee.devices import *
import time
import codecs

class start(object):
    while True:
        xbeeApi2 = DigiMeshDevice(port='/dev/ttyUSB0', baud_rate=9600)
        xbeeApi2.open()
        time.sleep(0.5)
        message = xbeeApi2.read_data(timeout=None)
        if message is not None:
            print(codecs.decode(message.data, 'utf-8'))
        time.sleep(1)
The XBee module is an S2C (XB24C) set as DigiMesh 2.4 TH, firmware 9002 (newest), with a USB dongle.
Python is 3.7 and my host hardware is a Raspberry Pi 3 B+ running Debian.
Any help would be appreciated.
EDIT 1
The exception is raised when xbeeApi2.open() is executed for the second time.
In fact, my original code has multiple threads that import the class where the port is opened, many times, before the previous thread has had the chance to close it.
The 'original' piece of code, which runs fine, is below:
from xbee import ZigBee
import serial
import time

class start(object):
    while True:
        ser = serial.Serial('/dev/ttyUSB0', 9600)
        xbeeApi2 = ZigBee(ser, escaped=True)  # S2 and S2C
        time.sleep(0.5)
        message = ''
        try:
            message = xbeeApi2.wait_read_frame(timeout=0.5)
        except:
            pass  # timeout exception
        if message is not None:
            print(message)
        time.sleep(1)
Well, you aren't closing it. Why not create the device and open it once, before your while True loop?
You could also sleep for 0.1 seconds only when message is None, to reduce processor load. If there was a message, you want to check immediately for another queued message before doing any sleeping; that way you can keep up with the message queue.
from digi.xbee.devices import *
import time
import codecs

class start(object):
    xbeeApi2 = DigiMeshDevice(port='/dev/ttyUSB0', baud_rate=9600)
    xbeeApi2.open()
    while True:
        message = xbeeApi2.read_data(timeout=None)
        if message is not None:
            print(codecs.decode(message.data, 'utf-8'))
        else:
            time.sleep(0.1)
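Since the original problem was re-opening a port that was never released, it is also worth guaranteeing the close on the way out. A minimal sketch under that assumption (open/close/is_open are part of the digi.xbee device API; the loop body mirrors the answer above):

from digi.xbee.devices import DigiMeshDevice
import codecs
import time

xbeeApi2 = DigiMeshDevice(port='/dev/ttyUSB0', baud_rate=9600)
try:
    xbeeApi2.open()
    while True:
        message = xbeeApi2.read_data(timeout=None)
        if message is not None:
            print(codecs.decode(message.data, 'utf-8'))
        else:
            time.sleep(0.1)
finally:
    if xbeeApi2.is_open():
        xbeeApi2.close()  # release /dev/ttyUSB0 so a later open() can succeed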

APScheduler doesn't fire the job after waking up from system sleep

I wrote a simple Python script scheduled to send me a message once a month. I run the script on my MacBook, which is mostly in sleep mode during the day (I never shut down my laptop).
However, it didn't execute the job when I woke my laptop up after the time for the job had passed (the delay is within my misfire_grace_time, which I deliberately set long enough to cover a 1-month sleep).
Here is my script:
# Define function to check internet connection
try:
    import httplib
except ImportError:
    import http.client as httplib

def have_internet():
    conn = httplib.HTTPConnection("www.google.com", timeout=5)
    try:
        conn.request("HEAD", "/")
        conn.close()
        return True
    except:
        conn.close()
        return False

# =============================================================================
# songline is a module for sending a message to my mobile phone via the
# "LINE" application
import songline

token = '.......this is my API token.......'
messenger = songline.Sendline(token)
msg = 'Checked'
# =============================================================================
import time
from apscheduler.schedulers.background import BackgroundScheduler

sched = BackgroundScheduler()

def job_check():
    if have_internet():
        messenger.sendtext(msg)
    else:
        time.sleep(60)
        job_check()

sched.add_job(job_check, trigger="cron", day=1, hour=20, misfire_grace_time=2592000)
sched.start()

while True:
    time.sleep(1)
I ran this Python script from the command line:
nohup python /path/to/script.py &
Thank you in advance for any suggestions! I'm very new to Python scripting and APScheduler.
I had the same problem. It happens because next_wakeup_time is not updated in the job store while the PC is sleeping.
First, I caught the problem like this:
import logging
import time
from datetime import datetime

from tzlocal import get_localzone  # tzlocal is an APScheduler dependency

def ensure_scheduled_jobs(scheduler, logger: logging.Logger):
    dt_now = datetime.now(tz=get_localzone())
    jobstore = 'default'
    missed_jobs = []
    for j in scheduler.get_jobs(jobstore=jobstore):
        if (dt_now - j.next_run_time).total_seconds() > j.misfire_grace_time:
            missed_jobs.append(j)
            logger.warning(f'Found missed job {j.id} [{j.next_run_time}] < now is: [{dt_now}]')
    return missed_jobs
...
scheduler.start()
while True:
    time.sleep(60)
    missed_jobs = ensure_scheduled_jobs(scheduler, logger)
After that, I think all that is needed is to reschedule the job by calling scheduler.reschedule_job() (apscheduler.schedulers.base.BaseScheduler.reschedule_job), or something else that recalculates the next run time:
scheduler.pause_job(j.id)
scheduler.resume_job(j.id)
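Putting the two pieces together, a minimal sketch of the recovery loop, following the answer's own reasoning that pause/resume forces APScheduler to recompute next_run_time from the trigger (names as above):

while True:
    time.sleep(60)
    for j in ensure_scheduled_jobs(scheduler, logger):
        # pausing clears next_run_time; resuming recomputes it from the trigger
        scheduler.pause_job(j.id)
        scheduler.resume_job(j.id)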

Missing packages with python-xbee library and raspberry

I'm trying to send some packets from an MSP to a Raspberry Pi 3 through two XBee S1 modules.
The XBees are configured as DigiMesh 2.4 with escaped frames, one as router and the other as coordinator. At the Raspberry the connection is made with a USB dongle.
My code on the MSP sends a packet every 10 us with CTS flow control. When the coordinator is plugged into my PC running Windows, I can see, through XCTU, all packets arriving; everything is just fine!
But when the dongle is at the Raspberry, running Raspbian and the following code, some packets do not arrive.
As everything works properly with XCTU, the problem resides in the code, probably in handling the serial port or something similar.
So any help would be much appreciated!
start.py:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# copyright: Thiago Cruz <thiagoalberto@gmail.com>
import sys
import os
from PyQt4 import QtGui
from middleware.QueueMiddleware import QueueMiddleware
from jobs.ScheduleJob import ScheduleJob

def startQueue():
    queue = QueueMiddleware()
    queue.start()

def assyncSchedule():
    schedule = ScheduleJob()
    schedule.run()

def runApp():
    startQueue()
    app = QtGui.QApplication(sys.argv)
    sys.exit(app.exec_())

if __name__ == '__main__':
    runApp()
QueueMiddleware.py:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# copyright: Thiago Cruz <thiagoalberto@gmail.com>
import threading
import time
import serial
import Queue

from middleware.DataProcessorGear import DataProcessorGear
from xbee import ZigBee

minutes = 0

class QueueMiddleware(threading.Thread):
    __instance = None
    PORT = '/dev/ttyUSB0'
    BAUD_RATE = 9600
    # The XBee addresses I'm dealing with
    BROADCAST = '\x00\x00\x00\x00\x00\x00\xFF\xFF'
    UNKNOWN = '\xFF\xFE'  # This is the 'I don't know' 16 bit address

    def __new__(cls):
        if QueueMiddleware.__instance is None:
            QueueMiddleware.__instance = super(QueueMiddleware, cls).__new__(cls)
        return QueueMiddleware.__instance

    def __init__(self):
        QueueMiddleware.__instance = self
        threading.Thread.__init__(self)
        self.dataPacketsQueue = Queue.Queue()
        # Create API object, which spawns a new thread
        self.ser = serial.Serial(
            port='/dev/ttyUSB0',
            baudrate=9600,
            parity=serial.PARITY_NONE,
            stopbits=serial.STOPBITS_ONE,
            bytesize=serial.EIGHTBITS,
            timeout=1
        )
        self.xbeeApi = ZigBee(self.ser, callback=self.message_received, escaped=True)
        print 'start queue'

    def __del__(self):
        # halt() must be called before closing the serial
        # port in order to ensure proper thread shutdown
        self.xbeeApi.halt()
        self.ser.close()
        self.processor = None

    def run(self):
        # Do other stuff in the main thread
        while True:
            try:
                time.sleep(1)
                #if self.dataPacketsQueue.qsize() > 0:
                #    lock = threading.Lock()
                #    processor = DataProcessorGear(self.dataPacketsQueue, lock)
                #    processor.start()
            except KeyboardInterrupt:
                break

    def message_received(self, data):
        global minutes
        minutes += 1
        print minutes
        self.dataPacketsQueue.put(data, block=True, timeout=None)
I have already tried changing the values of time.sleep() and suppressing the execution of the subsequent threads to "isolate" the problem.
My console displays values from ~120 to ~170, but the MSP sends exactly 200 data packets!
So... any guesses?
Thanks in advance.
Enable hardware flow control and change the baudrate from 9600 to 115200. You'll have to update your XBee module configuration for the new baudrate, but you'll stand a chance of getting your packets through.
I assume you meant to write that you're sending packets every 10ms instead of every 10us. At 10ms/packet, you're at 100 packets/second. 9600 baud is only about 960 characters per second, and your packets are surely larger than 9 characters with the API overhead.
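A rough back-of-the-envelope check of that arithmetic (the frame size is an assumption; adjust for the real payload):

# back-of-the-envelope throughput check; 20-byte frames are an assumption
frame_bytes = 20         # payload plus API framing overhead (assumed)
packets_per_sec = 100    # one packet every 10 ms
bits_per_byte = 10       # 8 data bits + start bit + stop bit
required_baud = frame_bytes * packets_per_sec * bits_per_byte
print(required_baud)     # 20000, well above 9600, so packets must be dropped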
Kind of a solution...
After trying different approaches in my code, I even tried the script below...
#!/usr/bin/python
import serial

serialport = serial.Serial("/dev/ttyUSB0", 115200, timeout=None, rtscts=True, dsrdtr=True)
while True:
    serialport.flush()
    command = serialport.readline()
    print str(command).encode('hex')
I was able to get the desired behavior by changing the XBee MR parameter (Mesh unicast retries, the maximum number of network packet delivery attempts) to the maximum (0x7). Now none of my packets are lost, even with a delay of 0 (zero) between the transmission of each packet.
Probably, as tomlogic said, if I run my code on a faster PC I'll get my packets.
When I do that test, I'll post the results here.
Thanks.

Unable to push large metrics (500+) within a min to graphite via python (Socket)

I have 5 MySQL slaves and 1 master; for each machine I'm collecting stats and pushing them to Graphite with a Python program. The script runs 6 threads (one per machine) and sends the metrics to Graphite through a socket. What is clear is that I'm unable to push more than ~70 stats at a time. Here's the program.
ThreadSocketClient.py
from threading import Thread, current_thread
import socket
import MySQLdb as Database
import logging
import sys
import time

from mysqlStats import status_keys
from mysqlStats import GRAPHITE_HOST, GRAPHITE_PORT

class MysqlGraphiteUtil(Thread):
    def __init__(self, host, port, user, password):
        Thread.__init__(self)
        print current_thread
        self.host = host
        self.port = port
        self.user = user
        self.password = password
        self.sock = socket.socket()
        self.sock.connect((GRAPHITE_HOST, GRAPHITE_PORT))

    def run(self):
        self.connectDB()
        self.showStatus()
        self.sock.close()

    def connectDB(self):
        try:
            self.db = Database.connect(
                host=self.host,
                port=self.port,
                user=self.user,
                passwd=self.password
            )
        except Exception, err:
            logging.exception(err)
            print err
            sys.exit()
        return self.db, self.host

    def showStatus(self):
        self.cursor = self.db.cursor()
        self.cursor.execute("SHOW GLOBAL STATUS")
        data = self.cursor.fetchall()
        self.stats = dict()
        for key, value in data:
            if key in status_keys:
                self.stats[key] = value
        self.db.close()
        self.push_data(self.stats, self.host)

    def push_data(self, stats, sname):
        for key, val in stats.iteritems():
            if val.isdigit():
                short = sname.replace('.', '_')
                message = 'mysql.' + short + '.' + key + ' ' + val + ' ' + '%d \n' % int(time.time())
                self.sock.send(message)
        self.sock.close()
The problem lies in push_data(): when I put time.sleep(0.30) after self.sock.close() it sends all the data, but it takes more than a minute (approx. 2 minutes). That is of no use, since I'm collecting the metrics once per minute. Please help me speed up the socket connect and send.
PS: Please let me know if any details are required from the supporting files; I shall post them.
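One thing worth ruling out on the client side before touching the server: socket.send() is not guaranteed to transmit the whole buffer in one call, while sendall() retries until every byte is written. A minimal sketch that batches all metrics into a single Graphite plaintext payload and sends it once (names mirror the question's code; an aside, not the accepted fix below):

def push_data(self, stats, sname):
    # build the whole plaintext-protocol payload first, then send it once
    ts = int(time.time())
    short = sname.replace('.', '_')
    lines = ['mysql.%s.%s %s %d' % (short, key, val, ts)
             for key, val in stats.iteritems() if val.isdigit()]
    self.sock.sendall('\n'.join(lines) + '\n')
    self.sock.close()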
I verified with tcpdump whether the server was consuming all the packets/data I was sending. It does, as expected. So I confirmed that the socket was not the problem.
As the next check, I wanted to verify carbon.conf completely and found that MAX_QUEUE_SIZE was 500. I increased it to 1000, but there was still a discrepancy between the totals sent and received.
I also found MAX_CREATES_PER_MINUTE=100, changed it to 1000, and that worked! This is the value that tells Graphite the maximum number of Whisper files it may create per minute.

Reconnecting to ZMQ feed after disconnect

I have this simple Python script which connects to a ZMQ feed and spits out some data:
#!/usr/bin/env python2
import zlib
import zmq
import simplejson

def main():
    context = zmq.Context()
    subscriber = context.socket(zmq.SUB)
    # Connect to the first publicly available relay.
    subscriber.connect('tcp://relay-us-east-1.eve-emdr.com:8050')
    # Disable filtering.
    subscriber.setsockopt(zmq.SUBSCRIBE, "")
    while True:
        # Receive raw market JSON strings.
        market_json = zlib.decompress(subscriber.recv())
        # Un-serialize the JSON data to a Python dict.
        market_data = simplejson.loads(market_json)
        # Dump typeID
        results = rowsets = market_data.get('rowsets')[0]
        print results['typeID']

if __name__ == '__main__':
    main()
This is running on my home server. Sometimes my home server loses connectivity to the internet, the curse of a residential connection. When the network drops out and comes back, however, the script stalls. Is there any way to reinitialize the connection? I'm still new to Python; a point in the right direction would be wonderful. =)
Not sure this is still relevant, but here goes:
Use a timeout (examples here, here and here). On ZMQ < 3.0 it would look something like this (not tested):
#!/usr/bin/env python2
import zlib
import zmq
import simplejson

def main():
    context = zmq.Context()
    while True:
        subscriber = context.socket(zmq.SUB)
        # Connect to the first publicly available relay.
        subscriber.connect('tcp://relay-us-east-1.eve-emdr.com:8050')
        # Disable filtering.
        subscriber.setsockopt(zmq.SUBSCRIBE, "")
        this_call_blocks_until_timeout = recv_or_timeout(subscriber, 60000)
        print 'Timeout'
        subscriber.close()

def recv_or_timeout(subscriber, timeout_ms):
    poller = zmq.Poller()
    poller.register(subscriber, zmq.POLLIN)
    while True:
        socket = dict(poller.poll(timeout_ms))
        if socket.get(subscriber) == zmq.POLLIN:
            # Receive raw market JSON strings.
            market_json = zlib.decompress(subscriber.recv())
            # Un-serialize the JSON data to a Python dict.
            market_data = simplejson.loads(market_json)
            # Dump typeID
            results = rowsets = market_data.get('rowsets')[0]
            print results['typeID']
        else:
            # Timeout!
            return

if __name__ == '__main__':
    main()
ZMQ > 3.0 allows you to set the socket's RCVTIMEO option, which causes its recv() to raise a timeout error without the need for a Poller object.
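A minimal sketch of that variant (pyzmq raises zmq.Again when the timeout expires; endpoint and handling as in the loop above):

import zlib
import zmq

context = zmq.Context()
subscriber = context.socket(zmq.SUB)
subscriber.connect('tcp://relay-us-east-1.eve-emdr.com:8050')
subscriber.setsockopt(zmq.SUBSCRIBE, "")
subscriber.setsockopt(zmq.RCVTIMEO, 60000)  # recv() now gives up after 60 s
try:
    market_json = zlib.decompress(subscriber.recv())
except zmq.Again:
    # Timeout: close and rebuild the socket, as in the loop above.
    subscriber.close()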
