Closing ClientSession on keyboard interrupt - Python

I am making a Discord bot with Python. When a user types a command, the bot fetches data from a URL and shows it. I use aiohttp for asynchronous HTTP requests, but the discord.py documentation says,
Since it is better to not create a session for every request, you should store it in a variable and then call session.close on it when it needs to be disposed.
So I changed all my code from
async with aiohttp.ClientSession() as session:
    async with session.get('url') as response:
        # something to do
into
# Global variable
session = aiohttp.ClientSession()

async with session.get('url') as response:
    # something to do
All HTTP requests use the globally defined session. But when I run this code and stop it with a keyboard interrupt (Ctrl + C), these warning messages appear:
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x0000015A45ADBDD8>
Unclosed connector
connections: ['[(<aiohttp.client_proto.ResponseHandler object at 0x0000015A464925E8>, 415130.265)]']
connector: <aiohttp.connector.TCPConnector object at 0x0000015A454B3320>
How can I close the ClientSession when the program is stopped by a keyboard interrupt?
What I tried:
I tried the following, but nothing worked.
1. Making a class and closing the session in its __del__. __del__ was not called on keyboard interrupt.
class Session:
    def __init__(self):
        self._session = aiohttp.ClientSession()

    def __del__(self):
        self._session.close()
2. An infinite loop in main, catching KeyboardInterrupt. The program blocks at bot.run(), so the loop is never reached.
from time import sleep
from discord.ext import commands

if __name__ == "__main__":
    bot = commands.Bot()
    bot.run(token)  # blocked
    try:
        while True:
            sleep(1)
    except KeyboardInterrupt:
        session.close()
3. Closing the session when the bot is disconnected. on_disconnect was not called on keyboard interrupt.
@bot.event
async def on_disconnect():
    await session.close()
Edit: I missed await before session.close() in some snippets above, but that was just a mistake while writing this question. Everything I tried still didn't work as expected even when written correctly with await.

You must await the closing of a ClientSession object:
await session.close()
Notice that it is a coroutine in the docs here. Your attempt #3 is probably best suited for this problem, as on_disconnect is naturally an async function.
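If you would rather not depend on on_disconnect firing, another option is to hook the bot's own shutdown path. A minimal sketch, assuming discord.py's commands.Bot and that bot.run() closes the client when it catches the keyboard interrupt (MyBot and the session attribute are illustrative names, not anything from the question):

import discord
import aiohttp
from discord.ext import commands

class MyBot(commands.Bot):
    session = None  # shared aiohttp session, created lazily

    async def close(self):
        # Dispose of the shared session before the client itself shuts down
        if self.session is not None:
            await self.session.close()
        await super().close()

bot = MyBot(command_prefix="!", intents=discord.Intents.default())

@bot.event
async def on_ready():
    # Create the session inside the running event loop
    if bot.session is None:
        bot.session = aiohttp.ClientSession()

bot.run(token)  # token as in the question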

I tried the following code and it seems to work well.
import asyncio
import aiohttp

class Session:
    def __init__(self):
        self._session = aiohttp.ClientSession()

    def __del__(self):
        loop = asyncio.get_event_loop()
        loop.run_until_complete(self.close())

    async def close(self):
        await self._session.close()

session = Session()

Related

discord.py main thread being blocked

I have a very simple bot that makes a request to https://jsonplaceholder.typicode.com/todos/1 every 0.5 seconds (this is a practice project, I know it's not recommended). Upon running the bot, it starts, but due to the high volume of requests it won't execute sayHi(), because the thread is blocked.
What I want: a way to execute the requests inside another thread, if possible.
@bot.command()
async def sayHi(ctx, arg):
    print("Hi")

async def make_request():
    r = requests.get("https://jsonplaceholder.typicode.com/todos/1")
    print(r)

@tasks.loop(seconds=0.5)
async def scan_loop():
    await make_request()

@bot.event
async def on_ready():
    print("Bot is running")
    scan_loop.start()

bot.run(TOKEN)
Of course, you can use multiprocessing or threading to run a request in another thread, but there is a second way: using the asynchronous aiohttp library instead of the blocking requests library.
So your make_request function will look something like this with aiohttp:
import aiohttp

async def make_request():
    async with aiohttp.ClientSession() as session:
        async with session.get("https://jsonplaceholder.typicode.com/todos/1") as response:
            print(await response.json())
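If you do want to stay with requests, the blocking call can instead be pushed onto a worker thread so the event loop stays free. A rough sketch, assuming Python 3.9+ for asyncio.to_thread:

import asyncio
import requests

async def make_request():
    # requests.get blocks, so run it in a worker thread instead of on the event loop
    r = await asyncio.to_thread(requests.get, "https://jsonplaceholder.typicode.com/todos/1")
    print(r.json())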

Why is an asyncio task garbage collected when opening a connection inside it?

I am creating a server which needs to make an external request while responding. To handle concurrent requests I'm using Python's asyncio library. I have followed some examples from the standard library. It seems, however, that some of my tasks are destroyed, printing Task was destroyed but it is pending! to my terminal. After some debugging and research I found a Stack Overflow answer which seemed to explain why.
I have created a minimal example demonstrating this effect below. My question is: in what way should one counteract this effect? Storing a hard reference to the task, for example by storing asyncio.current_task() in a global variable, mitigates the issue. It also seems to work fine if I wrap the future remote_read.read() as await asyncio.wait_for(remote_read.read(), 5). However, I do feel like these solutions are ugly.
# run and visit http://localhost:8080/ in your browser
import asyncio
import gc

async def client_connected_cb(reader, writer):
    remote_read, remote_write = await asyncio.open_connection("google.com", 443, ssl=True)
    await remote_read.read()

async def cleanup():
    while True:
        gc.collect()
        await asyncio.sleep(1)

async def main():
    server = await asyncio.start_server(client_connected_cb, "localhost", 8080)
    await asyncio.gather(server.serve_forever(), cleanup())

asyncio.run(main())
I am running Python 3.10 on macOS 10.15.7.
It looks like, for the time being, the only way is actually keeping a reference manually.
Maybe a decorator is more convenient than having to manually add the code in each async function. I opted for the class design, so that a class attribute can hold the hard references while the tasks run. (A local variable in the wrapper function would be part of the task-reference cycle, and the garbage collection would trigger all the same):
# run and visit http://localhost:8080/ in your browser
import asyncio
import gc
from functools import wraps
import weakref

class Shielded:
    registry = set()

    def __init__(self, func):
        self.func = func

    async def __call__(self, *args, **kw):
        self.registry.add(task := asyncio.current_task())
        try:
            result = await self.func(*args, **kw)
        finally:
            self.registry.remove(task)
        return result

def _Shielded(func):
    # Used along with the print sequence to assert the task was actually destroyed
    @wraps(func)
    async def wrapper(*args, **kwargs):
        ref = weakref.finalize(asyncio.current_task(), lambda: print("task destroyed"))
        return await func(*args, **kwargs)
    return wrapper

@Shielded
async def client_connected_cb(reader, writer):
    print("at task start")
    # registry.append(asyncio.current_task())
    # I've connected this to a socket in an interactive session; I'd explicitly .close() it for debugging:
    remote_read, remote_write = await asyncio.open_connection("localhost", 8060, ssl=False)
    print("commencing remote read")
    await remote_read.read()
    print("task complete")

async def cleanup():
    while True:
        gc.collect()
        await asyncio.sleep(1)

async def main():
    server = await asyncio.start_server(client_connected_cb, "localhost", 8080)
    await asyncio.gather(server.serve_forever(), cleanup())

asyncio.run(main())
Moreover, I wanted to "really see it", so I created a "fake" _Shielded decorator that would just log something when the underlying task got deleted: "task complete" is never printed with it, indeed.
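For comparison, the pattern the asyncio documentation suggests for tasks you create yourself is the same idea without a decorator: keep each task in a module-level set and discard it when it finishes. A small sketch of that (spawn is an illustrative helper name):

import asyncio

background_tasks = set()

def spawn(coro):
    # Hold a strong reference so the task cannot be garbage-collected while pending
    task = asyncio.create_task(coro)
    background_tasks.add(task)
    task.add_done_callback(background_tasks.discard)
    return task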

Discord cogs don't load

I am trying to run two different Discord bots using a single Python script with cogs. But when I try to run the 2nd bot it throws an ImportError, even though I didn't use that specific library. The reaction roles bot works fine without the anti-spam bot. Here's my code. FYI, I am working inside a virtual env.
main.py
if __name__ == "__main__":
    try:
        reaction_role_bot = commands.Bot(command_prefix=config["reaction_role_bot"]["bot_prefix"], intents=discord.Intents.all())
        reaction_slash = SlashCommand(reaction_role_bot, sync_commands=True)
        reaction_role_bot.load_extension(f"cogs.{str(os.path.basename('cogs/reaction_roles.py')[:-3])}")

        anti_spam_bot = commands.Bot(command_prefix=config["anti_spam_bot"]["bot_prefix"], intents=discord.Intents.default())
        spam_slash = SlashCommand(anti_spam_bot, sync_commands=True)
        anti_spam_bot.load_extension(f"cogs.{str(os.path.basename('cogs/anti_spam.py')[:-3])}")

        event_loop = asyncio.get_event_loop()
        event_loop.create_task(reaction_role_bot.run(config["reaction_role_bot"]["token"]))
        event_loop.create_task(anti_spam_bot.run(config["anti_spam_bot"]["token"]))
        event_loop.run_forever()
    except Exception as e:
        print(e)
anti_spam.py
import platform
import os

import discord
from discord.ext import commands
from antispam import AntiSpamHandler
from antispam.plugins import AntiSpamTracker, Options

class AntiSpamBot(commands.Cog):
    def __init__(self, client):
        self.client = client
        # Initialize the AntiSpamHandler
        self.client.handler = AntiSpamHandler(self.client, options=Options(no_punish=True))
        # 3 being how many 'punishment requests' before is_spamming returns True
        self.client.tracker = AntiSpamTracker(self.client.handler, 3)
        self.client.handler.register_extension(self.client.tracker)

    @commands.Cog.listener()
    async def on_ready(self):
        print("---------------------------------")
        print(f"Logged in as {str(self.client.user)}")
        print(f"Discord.py API version: {discord.__version__}")
        print(f"Python version: {platform.python_version()}")
        print(f"Running on: {platform.system()} {platform.release()} ({os.name})")
        await self.client.change_presence(status=discord.Status.idle, activity=discord.Game(name="Head of Security"))
        print("---------------------------------\n")

    # The code in this event is executed every time a valid command catches an error
    @commands.Cog.listener()
    async def on_command_error(self, context, error):
        raise error

    @commands.Cog.listener()
    async def on_message(self, message):
        await self.client.handler.propagate(message)
        if self.client.tracker.is_spamming(message):
            await message.delete()
            await message.channel.send(f"{message.author.mention} has been automatically kicked for spamming.")
            await message.author.kick()
        await self.client.process_commands(message)

def setup(client):
    client.add_cog(AntiSpamBot(client))
Error
Extension 'cogs.anti_spam' raised an error: ImportError: cannot import name 'AsyncMock' from 'unittest.mock' (/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/unittest/mock.py)
I have no experience using cogs and it's a bit confusing to me. Any kind of help would help me sort this out! Thanks in advance!
I do not believe this is a cog registration issue; I believe it is an import error from one of the dependencies in your cog file. I googled your error and found something similar, and I recommend checking it out here for more information.
As a blanket statement, I would double-check that you have mock installed, and that you're installing it for the version of Python you think you're installing it on. Things can get wonky if you have multiple Python versions installed.
Also, on an unrelated note: it is best to avoid running multiple bot instances in one Python file, but I can help you do it the best way possible.
For starters, you have to realize that Client.run is an abstraction over a couple of lower-level concepts.
There is Client.login, which logs in the client, and then Client.connect, which actually runs the processing. These are coroutines.
asyncio provides the capability of putting things in the event loop for it to work on whenever it has time to.
Something like this, e.g.:
loop = asyncio.get_event_loop()

async def foo():
    await asyncio.sleep(10)
    loop.close()

loop.create_task(foo())
loop.run_forever()
If we want to wait for something to happen from another coroutine, asyncio provides this functionality as well, through synchronisation via asyncio.Event. You can think of it as a boolean that you are waiting for:
e = asyncio.Event()
loop = asyncio.get_event_loop()

async def foo():
    await e.wait()
    print('we are done waiting...')
    loop.stop()

async def bar():
    await asyncio.sleep(20)
    e.set()

loop.create_task(bar())
loop.create_task(foo())
loop.run_forever()  # foo will stop this event loop when 'e' is set to true
loop.close()
Using this concept we can apply it to the discord bots themselves.
import asyncio
import discord
from collections import namedtuple

# First, we must attach an event signalling when the bot has been
# closed to the client itself so we know when to fully close the event loop.
Entry = namedtuple('Entry', 'client event')
entries = [
    Entry(client=discord.Client(), event=asyncio.Event()),
    Entry(client=discord.Client(), event=asyncio.Event())
]

# Then, we should login to all our clients and wrap the connect call
# so it knows when to do the actual full closure
loop = asyncio.get_event_loop()

async def login():
    for e in entries:
        await e.client.login()

async def wrapped_connect(entry):
    try:
        await entry.client.connect()
    except Exception as e:
        await entry.client.close()
        print('We got an exception: ', e.__class__.__name__, e)
    entry.event.set()

# actually check if we should close the event loop:
async def check_close():
    futures = [e.event.wait() for e in entries]
    await asyncio.wait(futures)

# here is when we actually login
loop.run_until_complete(login())

# now we connect to every client
for entry in entries:
    loop.create_task(wrapped_connect(entry))

# now we're waiting for all the clients to close
loop.run_until_complete(check_close())

# finally, we close the event loop
loop.close()
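On more recent discord.py releases the same effect is usually simpler to get with Client.start(), which is the coroutine equivalent of run() (login plus connect). A sketch of that approach, with token_a and token_b as placeholders for your two tokens:

import asyncio
import discord

async def main():
    bot_a = discord.Client(intents=discord.Intents.default())
    bot_b = discord.Client(intents=discord.Intents.default())
    # Client.start() combines login() and connect(), so both bots
    # can run concurrently on one event loop
    await asyncio.gather(bot_a.start(token_a), bot_b.start(token_b))

asyncio.run(main())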

Asynchronous Python server: fire and forget at startup

I am writing a server that must handle asynchronous tasks. I'd rather stick to asyncio for the asynchronous code, so I chose to use Quart[-OpenAPI] framework with Uvicorn.
Now, I need to run a task (master.resume() in the code below) when the server is starting up without waiting for it to finish, that is, firing and forgetting it.
I'm not sure if it's even possible with asyncio, as I cannot await this task, but if I don't, I get a coroutine 'X' was never awaited error. Using loop.run_until_complete() as suggested in this answer would block the server until the task completes.
Here's a skeleton of the code that I have:
import asyncio
from quart_openapi import Pint, Resource

app = Pint(__name__, title="Test")

class Master:
    ...

    async def resume(self):
        await coro()

    async def handle_get(self):
        ...

@app.route("/test")
class TestResource(Resource):
    async def get(self):
        print("Received get")
        asyncio.create_task(master.handle_get())
        return {"message": "Request received"}

master = Master()

# How do I fire & forget master.resume()?
master.resume()  # <== This throws "RuntimeWarning: coroutine 'Master.resume' was never awaited"
asyncio.get_event_loop().run_until_complete(master.resume())  # <== This prevents the server from starting
Should this not be achievable with asyncio/Quart, what would be the proper way to do it?
It is; see these docs. In summary,
@app.before_serving
async def startup():
    asyncio.ensure_future(master.resume())
I'd hold on to the task though, so that you can cancel it at shutdown,
@app.before_serving
async def startup():
    app.background_task = asyncio.ensure_future(master.resume())

@app.after_serving
async def shutdown():
    app.background_task.cancel()  # Or something similar
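If you also want shutdown to wait for the task's cleanup to finish, one way (a sketch, not part of the original answer) is to extend the shutdown hook to await the cancelled task and suppress the expected CancelledError:

import asyncio
import contextlib

@app.after_serving
async def shutdown():
    app.background_task.cancel()
    # Awaiting the cancelled task lets it finish unwinding; CancelledError is expected here
    with contextlib.suppress(asyncio.CancelledError):
        await app.background_task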

Python asyncio, aioamqp callbacks and pytest

I'm trying to write some tests for some asynchronous Python code using the aioamqp message broker, but pytest and callbacks fail me.
Simply put, when the aioamqp basic_consume() function receives a message and calls the assigned asynchronous callback, inside the callback I can do whatever I like -- reference unassigned variables, assert something outrageous -- and pytest happily passes the test. Clearly an exception gets raised under the hood and the test is interrupted, since the callback function never runs further than the first failing line, but the failure never rises all the way to pytest.
Here's a code snippet to demonstrate:
import aioamqp
import asyncio
import pytest

MQ_HOST = '0.0.0.0'
MQ_PORT = 5672
MQ_LOGIN = 'login'
MQ_PASSWORD = 'password'

class MockMQ:
    def __init__(self):
        self.loop = asyncio.get_event_loop()
        self.transport = None
        self.protocol = None

    async def connect(self):
        try:
            self.transport, self.protocol = await aioamqp.connect(
                host=MQ_HOST, port=MQ_PORT, login=MQ_LOGIN, password=MQ_PASSWORD
            )
            self.channel = await self.protocol.channel()
        except aioamqp.AmqpClosedConnection:
            print('closed connection')
            return

    async def close(self):
        await self.protocol.close()
        self.transport.close()

    async def publish(self, data, queue_name, exchange='', properties=None):
        queue = await self.channel.queue_declare(queue_name)
        await self.channel.publish(data, exchange, queue_name, properties=properties)

    async def consume(self, callback, queue_name):
        await self.channel.basic_consume(callback, queue_name=queue_name)

@pytest.mark.asyncio
async def test_mq():
    """Basic ping-pong test for RabbitMQ."""
    QUEUE_NAME = 'my_queue'

    @pytest.mark.asyncio
    async def callback(channel, body, envelope, properties):
        """This is the callback called when an MQ message is consumed."""
        print('we are here')
        await channel.basic_client_ack(envelope.delivery_tag)
        print(body)  # this gets printed as well
        foo = bar * 2  # this is where we fail
        assert body == b'bar'
        print('we never arrive here')

    mq = MockMQ()
    await mq.connect()
    await mq.consume(callback, QUEUE_NAME)
    await mq.publish(b'foo', QUEUE_NAME)
    await asyncio.sleep(1.0)

    await mq.close()

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(test_mq())
Running this via the main program with IPython results correctly in an exception, since it doesn't get swallowed by pytest.
What is the proper way of writing tests for pytest in this case? pytest-asyncio does not seem to affect this issue in the least.
EDIT: I might as well add that my dev environment uses Django and pytest-django, but removing it doesn't change the result either.
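One common workaround (not from this thread, just a sketch) is to keep assertions out of the consumer callback entirely: have the callback push whatever it receives into an asyncio.Queue and do the checks in the test body, where pytest can see a failure. Reusing the MockMQ class from the question:

import asyncio
import pytest

@pytest.mark.asyncio
async def test_mq_ping_pong():
    QUEUE_NAME = 'my_queue'
    received = asyncio.Queue()

    async def callback(channel, body, envelope, properties):
        # No assertions here; just hand the message back to the test
        await channel.basic_client_ack(envelope.delivery_tag)
        await received.put(body)

    mq = MockMQ()
    await mq.connect()
    await mq.consume(callback, QUEUE_NAME)
    await mq.publish(b'foo', QUEUE_NAME)

    body = await asyncio.wait_for(received.get(), timeout=1.0)
    assert body == b'foo'  # a failure here fails the test normally

    await mq.close()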
