How to (properly) update a worker QThread with new connection settings? - python

Suppose I have a QThread for running a plugin in my app and the plugin connects to a server specified by the user. When the user changes the server settings, the plugin should connect to the new server - as expected. Would it be a good idea to simply terminate the current plugin worker thread and spin up a new one when the user updates the settings?
This is what I've got at the moment:
class MainWindow(QMainWindow):
    def __init__(self):
        # ...
        self.hostname.editingFinished.connect(
            lambda: self._setup_new_server() or
            self._restart_plugin_work_thread()
        )
        self.port.editingFinished.connect(
            lambda: self._setup_new_server() or
            self._restart_plugin_work_thread()
        )

    def _create_plugin_worker_thread(self):
        self.plugin_worker_thread = QtCore.QThread()
        self.plugin_worker = PluginWorker()
        self.plugin_worker.moveToThread(self.plugin_worker_thread)
        self.plugin_worker_thread.start()
        self.plugin_worker.run_plugin_signal.connect(self.plugin_worker.run_plugin)
        self.plugin_worker.stop_plugin_signal.connect(self.plugin_worker.stop_run_plugin)

    def _terminate_plugin_worker_thread(self):
        self.plugin_worker_thread.terminate()

    def _restart_plugin_work_thread(self):
        # terminate the current worker thread
        self._terminate_plugin_worker_thread()
        # create a new worker thread
        self._create_plugin_worker_thread()

class PluginWorker(QtCore.QObject):
    run_plugin_signal = QtCore.Signal(str, int, str, str)
    stop_plugin_signal = QtCore.Signal()
    # ...
PluginWorker is the worker class which mostly relies on a QTimer that triggers the plugin's execution method every 2 seconds.
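For reference, a minimal sketch of what such a timer-driven worker might look like (the slot bodies and the periodic _execute method are assumptions, not the asker's actual code):
class PluginWorker(QtCore.QObject):
    run_plugin_signal = QtCore.Signal(str, int, str, str)
    stop_plugin_signal = QtCore.Signal()

    def run_plugin(self, host, port, user, password):
        # The QTimer must be created after moveToThread(), inside the
        # worker thread, so that its timeout events fire on that thread.
        self.timer = QtCore.QTimer()
        self.timer.timeout.connect(self._execute)
        self.timer.start(2000)  # trigger the plugin every 2 seconds

    def stop_run_plugin(self):
        if getattr(self, 'timer', None) is not None:
            self.timer.stop()

    def _execute(self):
        pass  # hypothetical: talk to the configured server here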
Any help will be appreciated. Thanks.

There are several possible solutions, though I haven't worked with Qt specifically.
If it were my code, I would kill the current thread and start a new one with the updated connection settings.
At thread construction, pass a threading.Event object to the thread and keep a reference to it in the main thread. When the connection settings are updated, set the event and create a new thread (passing it a fresh event object). Within the thread function, return as soon as the event has been set.
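A minimal sketch of that pattern with plain threading (the worker body, settings names, and the 2-second cadence are assumptions based on the question):
import threading
import time

def plugin_loop(hostname, port, stop_event):
    # connect to (hostname, port) here ...
    while not stop_event.is_set():
        # ... do one unit of plugin work against the server ...
        time.sleep(2)  # mirrors the 2-second QTimer cadence
    # ... clean up the connection before returning

class PluginController:
    def __init__(self):
        self.stop_event = None
        self.thread = None

    def restart(self, hostname, port):
        if self.thread is not None:
            self.stop_event.set()  # ask the old worker to return
            self.thread.join()     # wait until it has actually exited
        self.stop_event = threading.Event()
        self.thread = threading.Thread(
            target=plugin_loop, args=(hostname, port, self.stop_event),
            daemon=True)
        self.thread.start()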

Related

python : dynamically spawn multithread workers with flask-socket io and python-binance

Hello fellow developers,
I'm actually trying to create a small webapp that would allow me to monitor multiple Binance accounts from a dashboard and maybe, in the future, perform some small automatic trading actions.
My frontend is implemented with Vue+quasar and my backend server is based on python Flask for the REST api.
What I would like to do is be able to start a background process dynamically whenever a specific endpoint of my server is called. Once this process is started on the server, I would like it to communicate with my Vue client via websocket.
Right now I can spawn the worker and create the websocket communication, but somehow I can't figure out how to make all the threads in my worker work together. Let me get a bit more specific:
Once my worker is started, I'm trying to create at least two threads. One is the infinite loop allowing me to automate some small actions and the other one is the flask-socketio server that will handle the sockets connections. Here is the code of that worker :
customWorker.py
import os
import time
from flask import Flask
from flask_socketio import SocketIO, send, emit
import threading
import json
import eventlet
# custom class allowing me to communicate with my mongoDB
from db_wrap import DbWrap
from binance.client import Client
from binance.exceptions import BinanceAPIException, BinanceWithdrawException, BinanceRequestException
from binance.websockets import BinanceSocketManager

def process_message(msg):
    print('got a websocket message')
    print(msg)

class customWorker:
    def __init__(self, workerId, sleepTime, dbWrap):
        self.workerId = workerId
        self.sleepTime = sleepTime
        self.socketio = None
        self.dbWrap = DbWrap()
        # this retrieves the worker configuration from the database
        self.config = json.loads(self.dbWrap.get_worker(workerId))
        keys = self.dbWrap.get_worker_keys(workerId)
        self.binanceClient = Client(keys['apiKey'], keys['apiSecret'])

    def handle_message(self, data):
        print('My PID is {} and I received {}'.format(os.getpid(), data))
        send(os.getpid())

    def init_websocket_server(self):
        app = Flask(__name__)
        socketio = SocketIO(app, async_mode='eventlet', logger=True, engineio_logger=True, cors_allowed_origins="*")
        eventlet.monkey_patch()
        socketio.on_event('message', self.handle_message)
        self.socketio = socketio
        self.app = app

    def launch_main_thread(self):
        while True:
            print('My PID is {} and workerId {}'
                  .format(os.getpid(), self.workerId))
            if self.socketio is not None:
                info = self.binanceClient.get_account()
                self.socketio.emit('my_account', info, namespace='/')

    def launch_worker(self):
        self.init_websocket_server()
        self.socketio.start_background_task(self.launch_main_thread)
        self.socketio.run(self.app, host="127.0.0.1", port=8001, debug=True, use_reloader=False)
Once the REST endpoint is called, the worker is spawned by calling the birth_worker() method of a "Broker" object available within my server:
from multiprocessing import Process
from custom_worker import customWorker
#...
def create_worker(self, workerid, sleepTime, dbWrap):
    worker = customWorker(workerid, sleepTime, dbWrap)
    worker.launch_worker()

def birth_worker(self, workerid, sleepTime, dbWrap):
    p = Process(target=self.create_worker, args=(workerid, sleepTime, dbWrap))
    p.start()
So when this is done, the worker is launched in a separate process that successfully creates its threads and listens for socket connections. But my problem is that I can't use my binanceClient in my main thread. I think it uses threads internally, and the fact that I use eventlet, and in particular its monkey_patch() function, breaks it. When I try to call the binanceClient.get_account() method I get the error AttributeError: module 'select' has no attribute 'poll'.
I'm pretty sure it comes from monkey_patch, because if I call get_account() in the __init__() method of my worker (before patching) it works and I can get the account info. So I guess there is a conflict here that I've been trying to resolve, unsuccessfully.
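For reference, eventlet's monkey-patching is meant to happen before any other imports; a sketch of that reordering (an assumption about the fix, and it assumes nothing else imports these modules first):
# customWorker.py -- patch first, then import everything else, so that
# flask, flask_socketio and binance all see the patched versions of
# select/threading/socket.
import eventlet
eventlet.monkey_patch()

import os
import time
import json
from flask import Flask
from flask_socketio import SocketIO, send, emit
from binance.client import Client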
I've tried using only the thread mode for my socket.io app by passing async_mode='threading', but then my flask-socketio app won't start and listen for sockets, as the line self.socketio.run(self.app, host="127.0.0.1", port=8001, debug=True, use_reloader=False) blocks everything.
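With async_mode='threading', one workaround for that blocking call (a sketch, assuming the class layout above) is to run the server itself in a daemon thread so the main thread stays free for the polling loop:
# inside class customWorker, replacing launch_worker (hypothetical):
import threading

def run_server(self):
    # use_reloader must stay off when running outside the main thread
    self.socketio.run(self.app, host="127.0.0.1", port=8001,
                      debug=False, use_reloader=False)

def launch_worker(self):
    self.init_websocket_server()
    threading.Thread(target=self.run_server, daemon=True).start()
    self.launch_main_thread()  # the polling loop now owns the main thread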
I'm pretty sure I have an architecture problem here and that I shouldn't start my app by launching socketio.run. I've been unable to start it with gunicorn, for example, because I need it to be dynamic and to launch it from my Python scripts. I've been struggling to find the proper way to do this, and that's why I'm here today.
Could someone please give me a hint on how this is supposed to be achieved? How can I dynamically spawn a subprocess that manages a socket server thread, an infinite-loop thread, and the connections with binanceClient? I've been roaming Stack Overflow without success; every piece of advice is welcome, even an architecture reforge.
Here is my environment:
Manjaro Linux 21.0.1
pip-chill:
eventlet==0.30.2
flask-cors==3.0.10
flask-socketio==5.0.1
pillow==8.2.0
pymongo==3.11.3
python-binance==0.7.11
websockets==8.1

Launching celery task_monitor in django

Looking at the Celery docs, I can see that the task monitor is launched in a script (see below). In a Django implementation this won't be the case, as (in my understanding) I'll have to launch the task monitor in a thread.
Currently I'm launching the monitor the first time I run a job, then checking its state each subsequent time I run a job (see further below). This seems like a bad way to do this.
My question, broadly, is: what is the correct way to instantiate the task monitor for Celery in a Django project? But a good answer would include:
Is threading the accepted way to do this?
Should I launch this in a subprocess?
Do I need to be worried about the volume going through the task monitor (hence I should use threading)?
Is there a standard, widely accepted way to do this?
It seems I'm missing something really obvious.
# docs example - not implemented like this in my project
from celery import Celery

def my_monitor(app):
    state = app.events.State()

    def announce_failed_tasks(event):
        state.event(event)
        # task name is sent only with -received event, and state
        # will keep track of this for us.
        task = state.tasks.get(event['uuid'])
        print('TASK FAILED: %s[%s] %s' % (
            task.name, task.uuid, task.info(),))

    with app.connection() as connection:
        recv = app.events.Receiver(connection, handlers={
            'task-failed': announce_failed_tasks,
        })
        recv.capture(limit=None, timeout=None, wakeup=True)

if __name__ == '__main__':
    app = Celery(broker='amqp://guest@localhost//')
    # LAUNCHED HERE
    my_monitor(app)
# my current implementation
# If the celery_monitor is not instantiated, set it up
app = Celery('scheduler',
             broker=rabbit_url,  # RabbitMQ
             backend=redis_url,  # Redis
             include=tasks)
celery_monitor = Thread(target=build_monitor, args=[app], name='monitor-global', daemon=True)

# import celery_monitor into another module
global celery_monitor
if not celery_monitor.is_alive():
    try:
        celery_monitor.start()
        logger.debug('Celery Monitor - Thread Started (monitor-retry) ')
    except RuntimeError as e:  # occurs if thread is dead
        # create a new instance if the thread is dead
        logger.debug('Celery Monitor - Error restarting thread (monitor-retry): {}'.format(e))
        celery_monitor = Thread(target=build_monitor, args=[app], name='monitor-retry', daemon=True)
        celery_monitor.start()  # start thread
        logger.debug('Celery Monitor - Thread Re-Started (monitor-retry) ')
else:
    logger.debug('Celery Monitor - Thread is already alive. Dont do anything.')

I am trying to run an endless worker thread (daemon) from within Django

I have a worker thread whose only task is to query a list of active users from the database every 10 minutes and to send them an SMS message if a certain condition is fulfilled (which is checked every minute); the worker thread does not hinder the main application at all.
So far I have managed to get the thread up and running, and sending SMS works just fine too. However, for some reason the thread stops/gets killed after some random amount of time (hours). I run a try: ... except Exception as e: inside a while True to catch any errors that occur, and I print a message saying what error occurred.
Well, I never see any message and the thread is definitely down. Therefore, I suspect Gunicorn or Django is killing my thread, sort of gracefully.
I have put log and print statements all over the code but haven't found anything indicating why my thread is getting killed.
My wsgi.py file, where I call the function that starts my thread:
"""
WSGI config for django_web project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/2.1/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'django_web.settings')
application = get_wsgi_application()

'''
Start background services
Import has to happen after "get_wsgi_application()"; otherwise the docker container crashes
'''
try:
    from threading import Thread
    from timesheet.admin import runWorkmateServices
    runWorkmateServices()
except Exception as exp:
    print(exp)
The function that is called from within wsgi.py. I double-check whether the thread has already been started, to avoid having two up and running:
def runWorkmateServices(request=None):
    service_name = 'TimeKeeperWorkMateReminderService'
    thread_found = False
    for thread in threading.enumerate():
        if service_name in thread.name:
            thread_found = True
            break  # leave loop now
    if thread_found:
        print(f'Service has already been started: {service_name}')
        if request:
            messages.add_message(request, messages.ERROR, f'Service has already been started: {service_name}')
    else:
        Thread(target=WorkMateReminders, args=(), name=service_name, daemon=True).start()
        print(f'Started Service: {service_name}')
        if request:
            messages.add_message(request, messages.SUCCESS, f'Started Service: {service_name}')
The worker thread itself
def WorkMateReminders():
    print('Thread Started: WorkMateReminders')
    timer = 0
    employees = User.objects.none()
    while True:
        try:
            # Update the user list every n * sleep time (10 minutes)
            if timer % 10 == 0:
                timer = 0
                # Get active employees
                employees = User.objects.filter(is_active=True, profile__workmate_sms_reminders_activated=True)
                print(f'Employees updated at {datetime.now().date()} - {datetime.now().time()}: {employees}')
            # Checks run every minute
            WorkMateCheckClockOffTimes(employees=employees)
            WorkMateClockOnReminder(employees=employees)
            WorkMateEndOfBreakReminder(employees=employees)
            timer += 1  # increment timer
        except Exception as exp:
            print(f'Error: {exp}')
        time.sleep(60 * 1)
My goal is to have this worker thread running for as long as Django is up.
Most WSGI servers spawn workers that are killed/recycled fairly regularly, so spawning threads from these workers is not the best solution to your problem. There are several ways to go about this:
Cron
Create a management command that does what you want and configure cron to run it every 10 minutes (see the sketch after this list).
Celery/Celerybeat
Set up a Celery worker; this is a process that runs asynchronously to your Django application, and with celerybeat you can have tasks run at regular intervals.
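A minimal sketch of the management-command approach (the command name, app label, and reminder helpers are assumptions based on the question):
# timesheet/management/commands/workmate_reminders.py  (hypothetical path)
from django.contrib.auth.models import User
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = 'Runs one pass of the WorkMate SMS reminder checks.'

    def handle(self, *args, **options):
        employees = User.objects.filter(
            is_active=True, profile__workmate_sms_reminders_activated=True)
        # call the existing helpers here, e.g.
        # WorkMateCheckClockOffTimes(employees=employees)
        self.stdout.write(f'Checked {employees.count()} employees')
A crontab entry along the lines of */10 * * * * /path/to/venv/bin/python /path/to/manage.py workmate_reminders then runs it every 10 minutes.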

Execute Celery's link_error callback on a separate queue/worker

I have 2 apps on 2 separate servers, let's call them A and B. Both apps have a Celery worker active, listening to separate queues (QueueA and QueueB).
Server B pushes a task to QueueB, using apply_async.
Here are server B's tasks:
@app.task(bind=True, queue="QueueB", name="name_on_server_A")
def taskForServerB(self):
    # nothing is executed here
    pass

@app.task(bind=True)
def success(self, result):
    print('Task succeeded')

@app.task(bind=True)
def failure(self, *args, **kwargs):
    print('task failed')

taskForServerB.s().apply_async(link=success.s(), link_error=failure.s())
On server A, the task name_on_server_A receives the task and executes it. If it completes successfully, the success task is executed properly on server B, but if name_on_server_A fails, the failure task is not executed. Instead, server A throws a NotRegisteredError for a task with the name failure.
Is there something I am missing? How can I get the failure task to be executed on ServerB, where the first task is called from?
There are two issues here:
The routing of the task to the correct queue, which you defined for name_on_server_A with the queue assignment. (This is, by the way, new to me: I use a router in the Celery config and route each task by its name to the right queue.)
When you defined your Celery app, you might have forgotten to include the module that contains the failure task, so it is unregistered:
app = Celery(broker='amqp://', backend='...', include=['file1.py', 'file2.py', ..])
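For reference, a sketch of the name-based routing mentioned above (the queue assignments are assumptions):
app.conf.task_routes = {
    'name_on_server_A': {'queue': 'QueueA'},  # consumed by server A's worker
    'success': {'queue': 'QueueB'},           # callbacks should run on server B
    'failure': {'queue': 'QueueB'},
}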

What's the design solution for this situation?

I have the following situation (all three are functions in a Python class) where I have to send a message to a remote device, with two callbacks that give details about the state of the remote device.
# callback when an app has completed downloading on a remote device
def handleAppDownloadComplete():
    # something
    pass

# callback when an app has restarted on a remote device
def handleAppRestart():
    # app restart callback
    pass

def sendMessage(message):
    # Do things like validation etc.
    sendMessageToRemoteDevice(message)
My situation is:
1) Call sendMessage() when the handleAppDownloadComplete callback is called.
2) At any point during sendMessage(), if handleAppRestart() is called, stop execution of sendMessage(), wait for handleAppDownloadComplete() to be called back, and call sendMessage() again.
I have tried to use threading.Event, but this seems very cyclical to me. In addition, both callbacks are provided by third-party libraries and I can't change them. Any better way/design to handle this situation?
https://docs.python.org/3/library/asyncio-task.html#future (look at the example)
You could model the call to sendMessage() as a task that can be cancelled by handleAppRestart(). So you'd have a class attribute task that binds to the currently running task.
import asyncio

class foo:
    task = None
    loop = asyncio.get_event_loop()

    def handleAppDownloadComplete(self):
        self.task = asyncio.ensure_future(sendMessage(bar))  # bar: whatever message you need to send
        self.loop.run_until_complete(self.task)

    # callback when an app has restarted on a remote device
    def handleAppRestart(self):
        if self.task is not None:
            self.task.cancel()

@asyncio.coroutine
def sendMessage(message):
    # Do things like validation etc.
    sendMessageToRemoteDevice(message)
By the way, the snippet above is a schematic sketch rather than drop-in code.
Anyway, the answer is: use asynchronous abstractions to do what you want.
EDIT: Wait, you can't change handleAppDownloadComplete(), handleAppRestart() or sendMessage(message)?
