Cannot trigger an async function from another threaded function in Python

I am making a Discord bot that grabs a JSON using requests from time to time and then sends the relevant information to a specific channel.
I have the following classes:
Helper, the Discord bot itself, which runs async from the start, inside an asyncio.gather;
tasker, which controls the interval and calls the class that does the requests. It runs in a different thread so it doesn't block the async Helper while it waits;
getInfo, which does the requests, stores the info, and should talk to Helper.
I am having 2 problems right now:
While tasker is on a different thread, every time I try to talk to Helper via getInfo I get the errors RuntimeError: no running event loop and RuntimeWarning: coroutine 'getInfo.discordmsg' was never awaited.
If I don't run it on a different thread, however, it does work with TestStatus: 1, but it makes Helper get stuck and stop running with TestStatus: 2.
Anyway, here is the code:
import requests
import asyncio
import discord
from discord.ext import commands, tasks
from datetime import datetime, timedelta
import threading

class Helper(discord.Client):
    async def on_ready(self):
        global discordbot, taskervar
        servername = 'ServerName'
        discordbot = self
        self.servidores = dict()
        self.canais = dict()
        for i in range(len(self.guilds)):
            self.servidores[self.guilds[i].name] = {}
            self.servidores[self.guilds[i].name]['guild'] = self.guilds[i]
            servidor = self.guilds[i]
            for k in range(len(servidor.channels)):
                canal = servidor.channels[k]
                self.canais[str(canal.name)] = canal
            if 'bottalk' not in self.canais.keys():
                newchan = await self.servidores[self.guilds[i].name]['guild'].create_text_channel('bottalk')
                self.canais[str(newchan.name)] = newchan
            self.servidores[self.guilds[i].name]['canais'] = self.canais
        self.bottalk = self.get_channel(self.servidores[servername]['canais']['bottalk'].id)
        await self.msg("Bot online: " + converteHora(datetime.now(), True))
        print(f'{self.user} has connected to Discord!')
        taskervar.startprocess()

    async def msg(self, msg):
        await self.bottalk.send(msg)

    async def on_message(self, message):
        if message.author == self.user:
            return
        else:
            print(message)

class tasker:
    def __init__(self):
        global discordbot, taskervar
        print('Tasker start')
        taskervar = self
        self.waiter = threading.Event()
        self.lastupdate = datetime.now()
        self.nextupdate = datetime.now()
        self.thread = threading.Thread(target=self.requests)

    def startprocess(self):
        if not self.thread.is_alive():
            self.waiter = threading.Event()
            self.interval = 60*5
            self.thread = threading.Thread(target=self.requests)
            self.thread.start()

    def requests(self):
        while not self.waiter.is_set():
            getInfo()
            self.lastupdate = datetime.now()
            self.nextupdate = datetime.now() + timedelta(seconds=self.interval)
            self.waiter.wait(self.interval)

    def stopprocess(self):
        self.waiter.set()

class getInfo:
    def __init__(self):
        global discordbot, taskervar
        self.requests()

    async def discordmsg(self, msg):
        await discordbot.msg(msg)

    def requests(self):
        jsondata = {"TestStatus": 1}
        if jsondata['TestStatus'] == 1:
            print('here')
            asyncio.create_task(self.discordmsg("SOMETHING WENT WRONG"))
            taskervar.stopprocess()
            return
        elif jsondata['TestStatus'] == 2:
            print('test')
            hora = converteHora(datetime.now(), True)
            asyncio.create_task(self.discordmsg(str("Everything is fine but not now: " + hora)))
            print('test2')

def converteHora(dateUTC, current=False):
    if current:
        response = (dateUTC.strftime("%d/%m/%Y, %H:%M:%S"))
    else:
        response = (dateutil.parser.isoparse(dateUTC) - timedelta(hours=3)).strftime("%d/%m/%Y, %H:%M:%S")
    return response

async def main():
    TOKEN = 'TOKEN GOES HERE'
    tasker()
    await asyncio.gather(
        await Helper().start(TOKEN)
    )

if __name__ == '__main__':
    asyncio.run(main())

Your primary problem is that you don't give your secondary thread access to the asyncio event loop. You can't just await and/or create_task a coroutine on a global object from another thread (one of many reasons to avoid using global objects in the first place). Here is how you could modify your code to accomplish that:
class tasker:
    def __init__(self):
        # ...
        self.loop = asyncio.get_running_loop()
        # ...

class getInfo:
    # ...
    def requests(self):
        # replace the create_task calls with this, passing the coroutine
        # object (note the parentheses) and the captured loop:
        asyncio.run_coroutine_threadsafe(self.discordmsg("SOMETHING WENT WRONG"), taskervar.loop)
This uses your global variables because I don't want to rewrite your entire program, but I still strongly recommend avoiding them and considering a rewrite yourself.
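For reference, here is a minimal, self-contained sketch of the same pattern outside of discord.py (the names notify and worker are made up for illustration): a plain thread hands a coroutine back to the main event loop with asyncio.run_coroutine_threadsafe.
import asyncio
import threading

async def notify(msg):
    # stand-in for Helper.msg / getInfo.discordmsg
    print("async side received:", msg)

def worker(loop):
    # runs in a plain thread; it must not call asyncio.create_task directly
    future = asyncio.run_coroutine_threadsafe(notify("hello from the thread"), loop)
    future.result()  # optionally block this thread until the coroutine finishes

async def main():
    loop = asyncio.get_running_loop()  # capture the loop while it is running
    thread = threading.Thread(target=worker, args=(loop,))
    thread.start()
    await asyncio.sleep(0.5)  # keep the loop alive long enough for the demo
    thread.join()

asyncio.run(main())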
All that being said, I suspect you will still have this bug:
If I don't run it on a different thread, however, it does work with TestStatus: 1, but it makes Helper get stuck and stop running with TestStatus: 2.
I can't tell what would cause this issue, and I'm running into trouble reproducing it on my machine. Your code is pretty hard to read and is missing some details for reproducibility; I would imagine that is part of the reason why you didn't get an answer in the first place. I'm sure you're aware of this article, but it might be worth a revisit for better practices in sharing code: https://stackoverflow.com/help/minimal-reproducible-example

Related

Keep getting error function was never awaited asyncio for python

I can't figure this out at all; I've looked at all the questions, videos, and documents. I keep getting "was never awaited" back for async def renk_calc() -> bool:. I'm trying to get this function to perform its calculation and return, but I can't get it to work; no matter what method I tried, I ran into a different problem.
From reading and trying to understand, I gather that I have to run this function on a separate thread or a separate process, but I kept getting errors back: 'coroutine' object is not callable, coroutine is None, and coroutine cannot be pickled. It's probably because I'm very new to asyncio, and I was reading the documentation to implement a solution.
In essence, this function gets its input from async def renko_append():, which appends to the dictionary that then feeds the DataFrame, and I know the DataFrame is causing the issue because it's not an awaitable (or callable) object. But I don't know how to fix it.
Then the dictionary gets cleared, so it loops back and refills again, and that's the merry-go-round. The idea is then to grab the value returned by the function and use it in the next block.
nest_asyncio.apply()

list = []
renk = {"DATE": [], "OPEN": [], "HIGH": [], "LOW": [], "CLOSE": []}

async def renko_append():
    current_time = datetime.now()
    global list
    # print(f"this is the {list}")
    renk["DATE"].append(current_time.strftime("%Y-%m-%d %H:%M:%S"))
    renk["OPEN"].append(list[0])
    renk["HIGH"].append(max(list))
    renk["LOW"].append(min(list))
    renk["CLOSE"].append(list[-1])
    print(renk)
    list.clear()

async def renk_calc() -> bool:
    df = pd.DataFrame.from_dict(renk)
    df.columns = [i.lower() for i in df.columns]
    renko_ind = indicators.Renko(df)
    renko_ind.brick_size = 0.0006
    renko_ind.chart_type = indicators.Renko.PERIOD_CLOSE
    data = renko_ind.get_ohlc_data()
    result = data["uptrend"]
    return result

async def clr_dict():
    renk["DATE"] = []
    renk["OPEN"] = []
    renk["HIGH"] = []
    renk["LOW"] = []
    renk["CLOSE"] = []
    print(renk)

async def main():
    async def deal_msg(msg):
        if msg['topic'] == '/contractMarket/ticker:ADAUSDTM':
            ns = msg["data"]["ts"]
            time = datetime.fromtimestamp(ns // 1000000000)
            my_time = time.strftime("%H:%M:%S")
            price = msg["data"]["price"]
            # print(f'Get ADAUSDTM Ticker:price: {price} side: {msg["data"]["side"]} time: {my_time}')
            list.append(price)

    # client = WsToken()
    client = WsToken(key='', secret='', passphrase='', is_sandbox=False, url='')
    ws_client = await KucoinFuturesWsClient.create(loop, client, deal_msg, private=False)
    await ws_client.subscribe('/contractMarket/ticker:ADAUSDTM')
    while True:
        await asyncio.sleep(10, loop=loop)
        await asyncio.create_task(renko_append())
        await asyncio.create_task(renk_calc())
        await asyncio.create_task(clr_dict())

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
I edited to add the error: I figured out the "await" problem, but now I'm stuck on this error:
TypeError: 'coroutine' object is not callable
async def renk_calc() is what's not callable.
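For what it's worth, that TypeError usually means a coroutine object (the result of calling an async function) was used where a callable was expected. A minimal illustration, with a made-up name unrelated to the trading code above:
import asyncio

async def calc_demo() -> bool:
    return True

async def main():
    coro = calc_demo()   # calling the async function produces a coroutine object
    # coro()             # TypeError: 'coroutine' object is not callable
    result = await coro  # the coroutine is awaited, not called a second time
    print(result)

asyncio.run(main())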

How to check what coroutine is completed after asyncio.wait

Consider the following code:
import random
import asyncio

class RandomLife(object):
    def __init__(self, name: str):
        self.name = name
        self.coro = asyncio.sleep(random.randrange(0, 5))

    def __await__(self):
        return self.coro.__await__()

async def main():
    objects = [RandomLife("one"), RandomLife("two"), RandomLife("three")]
    finished, unfinished = await asyncio.wait(objects, return_when=asyncio.FIRST_COMPLETED)
    print(finished)
    await asyncio.wait(unfinished)

if __name__ == "__main__":
    asyncio.run(main())
After the first asyncio.wait I want to know which instance of RandomLife has completed. But the finished variable is a set of Tasks, rather than RandomLife instances. How do I convert a task back to its RandomLife? Is it possible?
As the documentation warns:
Note wait() schedules coroutines as Tasks automatically and later returns those implicitly created Task objects in (done, pending) sets. Therefore the following code won’t work as expected:
async def foo():
    return 42

coro = foo()
done, pending = await asyncio.wait({coro})
if coro in done:
    # This branch will never be run!
Here is how the above snippet can be fixed:
async def foo():
    return 42

task = asyncio.create_task(foo())
done, pending = await asyncio.wait({task})
if task in done:
    # Everything will work as expected now.
We can employ the same trick. First we need to wrap all the coroutines in tasks, and then set up a mapping from each task created to its RandomLife instance:
import random
import asyncio

class RandomLife(object):
    def __init__(self, name: str):
        self.name = name
        self.coro = asyncio.sleep(random.randrange(0, 5))

    def __await__(self):
        return self.coro.__await__()

async def main():
    objects = [RandomLife("one"), RandomLife("two"), RandomLife("three")]
    # Wrap all the coros in tasks, as the documentation suggests.
    tasks = [asyncio.create_task(o.coro) for o in objects]
    # Set up a mapping from the tasks created to the RandomLife instances.
    task2life = dict(zip(tasks, objects))
    finished, unfinished = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    # Get the first finished task.
    finished_task = list(finished)[0]
    # Map it back to the RandomLife instance.
    finished_life = task2life[finished_task]
    print(finished_life.name)
    await asyncio.wait(unfinished)

if __name__ == "__main__":
    asyncio.run(main())
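As a side note (not part of the original answer): on Python 3.8+ you can also attach a label directly to each task via the name parameter of asyncio.create_task and read it back with Task.get_name(), which avoids the extra dictionary when a string label is all you need:
import asyncio
import random

async def life(seconds):
    await asyncio.sleep(seconds)

async def main():
    tasks = [
        asyncio.create_task(life(random.randrange(0, 5)), name=label)
        for label in ("one", "two", "three")
    ]
    finished, unfinished = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    print(next(iter(finished)).get_name())  # label of a task that finished first
    if unfinished:
        await asyncio.wait(unfinished)

asyncio.run(main())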

Copying contexvars.Context between tasks

I have a program (an ASGI server) that is structured roughly like this:
import asyncio
import contextvars

ctxvar = contextvars.ContextVar("ctx")

async def lifepsan():
    ctxvar.set("spam")

async def endpoint():
    assert ctxvar.get() == "spam"

async def main():
    ctx = contextvars.copy_context()
    task = asyncio.create_task(lifepsan())
    await task
    task = asyncio.create_task(endpoint())
    await task

asyncio.run(main())
Because the lifespan event / endpoints are run in tasks, they can't share contextvars.
This is by design: tasks copy the context before executing, so lifespan can't set ctxvar properly.
This is the desired behavior for endpoints, but I would like for execution to appear like this (from a user's perspective):
async def lifespan():
    ctxvar.set("spam")
    await endpoint()
In other words, the endpoints are executed in their own independent context, but within the context of the lifespan.
I tried to get this to work by using contextlib.copy_context():
import asyncio
import contextvars

ctxvar = contextvars.ContextVar("ctx")

async def lifepsan():
    ctxvar.set("spam")
    print("set")

async def endpoint():
    print("get")
    assert ctxvar.get() == "spam"

async def main():
    ctx = contextvars.copy_context()
    task = ctx.run(asyncio.create_task, lifepsan())
    await task
    endpoint_ctx = ctx.copy()
    task = endpoint_ctx.run(asyncio.create_task, endpoint())
    await task

asyncio.run(main())
As well as:
async def main():
    ctx = contextvars.copy_context()
    task = asyncio.create_task(ctx.run(lifepsan))
    await task
    endpoint_ctx = ctx.copy()
    task = asyncio.create_task(endpoint_ctx.run(endpoint))
    await task
However it seems that contextvars.Context.run does not work this way (I guess the context is bound when the coroutine is created but not when it is executed).
Is there a simple way to achieve the desired behavior, without restructuring how the tasks are being created or such?
Here's what I came up with, inspired by PEP 555 and asgiref:
from contextvars import Context, ContextVar, copy_context
from typing import Any

def _set_cvar(cvar: ContextVar, val: Any):
    cvar.set(val)

class CaptureContext:
    def __init__(self) -> None:
        self.context = Context()

    def __enter__(self) -> "CaptureContext":
        self._outer = copy_context()
        return self

    def sync(self):
        final = copy_context()
        for cvar in final:
            if cvar not in self._outer:
                # new contextvar set
                self.context.run(_set_cvar, cvar, final.get(cvar))
            else:
                final_val = final.get(cvar)
                if self._outer.get(cvar) != final_val:
                    # value changed
                    self.context.run(_set_cvar, cvar, final_val)

    def __exit__(self, *args: Any):
        self.sync()

def restore_context(context: Context) -> None:
    """Restore `context` to the current Context"""
    for cvar in context.keys():
        try:
            cvar.set(context.get(cvar))
        except LookupError:
            cvar.set(context.get(cvar))
Usage:
import asyncio
import contextvars

ctxvar = contextvars.ContextVar("ctx")

async def lifepsan(cap: CaptureContext):
    with cap:
        ctxvar.set("spam")

async def endpoint():
    assert ctxvar.get() == "spam"

async def main():
    cap = CaptureContext()
    await asyncio.create_task(lifepsan(cap))
    restore_context(cap.context)
    task = asyncio.create_task(endpoint())
    await task

asyncio.run(main())
The sync() method is provided in case the task is long-running and you need to capture the context before it finishes. A somewhat contrived example:
import asyncio
import contextvars

ctxvar = contextvars.ContextVar("ctx")

async def lifepsan(cap: CaptureContext, event: asyncio.Event):
    with cap:
        ctxvar.set("spam")
        cap.sync()
        event.set()
        await asyncio.sleep(float("inf"))

async def endpoint():
    assert ctxvar.get() == "spam"

async def main():
    cap = CaptureContext()
    event = asyncio.Event()
    asyncio.create_task(lifepsan(cap, event))
    await event.wait()
    restore_context(cap.context)
    task = asyncio.create_task(endpoint())
    await task

asyncio.run(main())
I think it would still be much nicer if contextvars.Context.run worked with coroutines.
This feature will be supported in Python 3.11: https://github.com/python/cpython/issues/91150
You will be able to write:
async def main():
    ctx = contextvars.copy_context()
    task = asyncio.create_task(lifepsan(), context=ctx)
    await task
    endpoint_ctx = ctx.copy()
    task = asyncio.create_task(endpoint(), context=endpoint_ctx)
    await task
In the meantime, in current Python versions you will need a backport of this feature. I can't think of a good one, but a bad one is here.

Get data out of Redis Subscription not possible?

I am trying to obtain data from a Redis channel by using a subscription in my client application. I am using Python with asyncio and aioredis for this purpose.
I would like to use my subscription to have a variable in my main application updated when it changes on the server, but I cannot manage to pass the data received by the subscription to my main thread.
Following the aioredis website, I implemented my subscription with:
sub = await aioredis.create_redis(
    'redis://localhost')
ch1 = await sub.subscribe('channel:1')
assert isinstance(ch1, aioredis.Channel)

async def async_reader(channel, globarVar):
    while await channel.wait_message():
        msg = await channel.get(encoding='utf-8')
        # ... process message ...
        globarVar = float(msg)
        print("message in {}: {}".format(channel.name, msg))

tsk1 = asyncio.ensure_future(async_reader(ch1, upToDateValue))
But I cannot get the global variable to update; I guess Python just passes the current value as an argument (which I expected, but wanted to be sure).
Is there any viable option to get data out of a subscription, or to pass a reference to a shared variable or queue I could use?
You should redesign your code so you don't need a global variable. All of your processing should occur when receiving the message. However to modify a global variable you need to declare it in the function with the global keyword. You don't pass global variables around - you just use them.
Sub:
import aioredis
import asyncio
import json

gvar = 2

# Do everything you need here or call another function
# based on the message. Don't use a global variable.
async def process_message(msg):
    global gvar
    gvar = msg

async def async_reader(channel):
    while await channel.wait_message():
        j = await channel.get(encoding='utf-8')
        msg = json.loads(j)
        if msg == "stop":
            break
        print(gvar)
        await process_message(msg)
        print(gvar)

async def run(loop):
    sub = await aioredis.create_redis('redis://localhost')
    res = await sub.subscribe('channel:1')
    ch1 = res[0]
    assert isinstance(ch1, aioredis.Channel)
    await async_reader(ch1)
    await sub.unsubscribe('channel:1')
    sub.close()

loop = asyncio.get_event_loop()
loop.run_until_complete(run(loop))
loop.close()
publisher:
import asyncio
import aioredis

async def main():
    pub = await aioredis.create_redis('redis://localhost')
    res = await pub.publish_json('channel:1', ["Hello", "world"])
    await asyncio.sleep(1)
    res = await pub.publish_json('channel:1', "stop")
    pub.close()

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(main())
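Since the question also mentions a queue: as an alternative sketch (library-agnostic, not using aioredis; reader here merely stands in for the subscription loop), messages can be handed to the rest of the program through an asyncio.Queue instead of a global:
import asyncio

async def reader(queue):
    # stand-in for the subscription loop: push each "message" into the queue
    for msg in ("1.5", "2.5", "stop"):
        await queue.put(msg)
        await asyncio.sleep(0.1)

async def consumer(queue):
    # the rest of the application pulls values out as they arrive
    while True:
        msg = await queue.get()
        if msg == "stop":
            break
        print("latest value:", float(msg))

async def main():
    queue = asyncio.Queue()
    await asyncio.gather(reader(queue), consumer(queue))

asyncio.run(main())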

How to use an asyncio loop inside another asyncio loop

I have been trying all kinds of things to be able to use an asyncio loop inside another asyncio loop. Most of the time my tests just end in errors, such as:
RuntimeError: This event loop is already running
My example code below is just the base test I started with, so you can see the basics of what I am trying to do. I tried so many things after this test that it got too confusing, so I figured I should keep it simple when asking for help. If anyone can point me in the right direction, that would be great. Thank you for your time!
import asyncio

async def fetch(data):
    message = 'Hey {}!'.format(data)
    other_data = ['image_a.com', 'image_b.com', 'image_c.com']
    images = sub_run(other_data)
    return {'message': message, 'images': images}

async def bound(sem, data):
    async with sem:
        r = await fetch(data)
        return r

async def build(dataset):
    tasks = []
    sem = asyncio.Semaphore(400)
    for data in dataset:
        task = asyncio.ensure_future(bound(sem, data))
        tasks.append(task)
    r = await asyncio.gather(*tasks)
    return r

def run(dataset):
    loop = asyncio.get_event_loop()
    future = asyncio.ensure_future(build(dataset))
    responses = loop.run_until_complete(future)
    loop.close()
    return responses

async def sub_fetch(data):
    image = 'https://{}'.format(data)
    return image

async def sub_bound(sem, data):
    async with sem:
        r = await sub_fetch(data)
        return r

async def sub_build(dataset):
    tasks = []
    sem = asyncio.Semaphore(400)
    for data in dataset:
        task = asyncio.ensure_future(sub_bound(sem, data))
        tasks.append(task)
    r = await asyncio.gather(*tasks)
    return r

def sub_run(dataset):
    loop = asyncio.get_event_loop()
    future = asyncio.ensure_future(sub_build(dataset))
    responses = loop.run_until_complete(future)
    loop.close()
    return responses

if __name__ == '__main__':
    dataset = ['Joe', 'Bob', 'Zoe', 'Howard']
    responses = run(dataset)
    print(responses)
Running loop.run_until_complete inside a running event loop would block the outer loop, thus defeating the purpose of using asyncio. Because of that, asyncio event loops aren't recursive, and one shouldn't need to run them recursively. Instead of creating an inner event loop, await a task on the existing one.
In your case, remove sub_run and simply replace its usage:
images = sub_run(other_data)
with:
images = await sub_build(other_data)
And it will work just fine, running the sub-coroutines and not continuing with the outer coroutine until the inner one is complete, as you likely intended from the sync code.
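Put together, a minimal sketch of the fixed flow (same names as the question, semaphore omitted for brevity, and the URL building stands in for real work):
import asyncio

async def sub_fetch(data):
    return 'https://{}'.format(data)

async def sub_build(dataset):
    # gather schedules the sub-coroutines on the already-running loop
    return await asyncio.gather(*(sub_fetch(d) for d in dataset))

async def fetch(data):
    other_data = ['image_a.com', 'image_b.com', 'image_c.com']
    images = await sub_build(other_data)  # await instead of a nested event loop
    return {'message': 'Hey {}!'.format(data), 'images': images}

async def build(dataset):
    return await asyncio.gather(*(fetch(d) for d in dataset))

if __name__ == '__main__':
    print(asyncio.run(build(['Joe', 'Bob', 'Zoe', 'Howard'])))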
