What am I doing wrong? Is there a fix? I'm new to async programming; it's very confusing.
# myFile.py
import httpx
from time import sleep  # needed for the sleep() calls below

url = "..."    # placeholder: the real URL is defined elsewhere
params = {}    # placeholder: the real query parameters are defined elsewhere

async def ping_api():
    async with httpx.AsyncClient() as client:
        sleep(1)
        print('right after with')
        sleep(1)
        print('before await')
        sleep(1)
        response = await client.get(url, params=params)
        sleep(1)
        print('after await')
        sleep(1)
        data = response.json()  # what's wrong here?
        sleep(1)
        print('after json')
        sleep(1)
        return data
# myFastAPI.py
from myFile import ping_api

# app...

async def main():
    data = await ping_api()
Resulting error:
before await
after await
C:\Users\foo\grok\site-packages\httpx\_client.py:1772: UserWarning: Unclosed <authlib.integrations.httpx_client.oauth2_client.AsyncOAuth2Client object at 0x0000021F318EC5E0>. See https://www.python-httpx.org/async/#opening-and-closing-clients for details.
warnings.warn(
after json
Shouldn't the context manager automatically close the connection?
Is this a bug in the library or am I missing something?
Is response.json() the cause, or is the problem elsewhere and the warning just happens to print at this point?
https://github.com/encode/httpx/issues/1332
Found cause:
The issue was actually my usage of another library (tda-api). Found the answer here: "Do not attempt to use more than one Client object per token file, as this will likely cause issues with the underlying OAuth2 session management". My mistake was causing the warning to print in function calls that had no obvious relationship to it (e.g. datetime.combine(), response.json()). Instead of creating the client object inside the function, I created it outside and passed the client object as a parameter to my various async def functions.
The error does not occur in sync def functions because a sync function never yields the thread to the event loop before returning, so no two Client objects are ever live at the same time. Thus the 1:1 Client-object-to-token-file ratio is not violated in the synchronous case, and creating the Client object inside the function is not an issue.
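A minimal sketch of that pattern (the names and URL here are illustrative placeholders, not the actual tda-api API): the client is created once, outside the coroutines, and passed in, so concurrent coroutines never create a second client.

import asyncio
import httpx

async def ping_api(client: httpx.AsyncClient, url: str) -> dict:
    # The client is passed in; no second client is ever created,
    # even when many of these coroutines run concurrently.
    response = await client.get(url)
    return response.json()

async def main():
    # One client for the whole program (with tda-api, this would be
    # the single client tied to the token file).
    async with httpx.AsyncClient() as client:
        urls = ["https://httpbin.org/get", "https://httpbin.org/get"]
        results = await asyncio.gather(*(ping_api(client, u) for u in urls))
        print(results)

asyncio.run(main())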
I have a scraper that requires the use of a websocket server (can't go into too much detail on why because of company policy) that I'm trying to turn into a template/module for easier use on other websites.
I have one main function that runs the server loop (e.g. ping-pongs to keep the connection alive and sends work and stop commands when necessary) that I'm trying to turn into a generator that yields the HTML of scraped pages (asynchronously, of course). However, I can't figure out a way to turn the server into a generator.
This is essentially the code I would want (simplified to just show the main idea, of course):
import asyncio, websockets

needsToStart = False  # Setting this to True gets handled somewhere else in the script

async def run(ws):
    global needsToStart
    while True:
        data = await ws.recv()
        if data == "ping":
            await ws.send("pong")
        elif "<html" in data:
            yield data  # Yielding the page data
        if needsToStart:
            await ws.send("work")  # Starts the next scraping session
            needsToStart = False

generator = websockets.serve(run, 'localhost', 9999)

while True:
    html = await anext(generator)
    # Do whatever with html
This, of course, doesn't work, giving the error "TypeError: 'Serve' object is not callable". But is there any way to set up something along these lines? An alternative I could try is creating an 'intermediate' object that holds the data, which the end loop awaits, but that seems messier to me than figuring out a way to get this idea to work.
Thanks in advance.
I found a solution that essentially works backwards, for those in need of the same functionality: instead of yielding the data, I pass along the function that processes said data. Here's the updated example case:
import asyncio, websockets
from functools import partial

needsToStart = False  # Setting this to True gets handled somewhere else in the script

def process(html):
    pass

async def run(ws, htmlFunc):
    global needsToStart
    while True:
        data = await ws.recv()
        if data == "ping":
            await ws.send("pong")
        elif "<html" in data:
            htmlFunc(data)  # Processing the page data
        if needsToStart:
            await ws.send("work")  # Starts the next scraping session
            needsToStart = False

func = partial(run, htmlFunc=process)
websockets.serve(func, 'localhost', 9999)
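If the generator style is still preferred, here's a sketch of the 'intermediate object' idea mentioned in the question (my own variant, not part of the original answer): the handler pushes pages onto an asyncio.Queue, and an async generator wraps the queue, so the consumer really can write `async for html in pages()`:

import asyncio
import websockets

queue: asyncio.Queue = asyncio.Queue()

async def run(ws):
    # Same server loop as above, but pages go onto a queue
    # instead of being handed to a callback.
    while True:
        data = await ws.recv()
        if data == "ping":
            await ws.send("pong")
        elif "<html" in data:
            await queue.put(data)

async def pages():
    # Async generator view over the queue.
    while True:
        yield await queue.get()

async def main():
    async with websockets.serve(run, "localhost", 9999):
        async for html in pages():
            print(len(html))  # do whatever with html

asyncio.run(main())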
I found an async httpx example where ensure_future works but create_task doesn't, and I can't figure out why. Since create_task is, as I understand it, the preferred approach, I'm wondering what's happening and how I can solve the problem.
I've been using an async httpx example at https://www.twilio.com/blog/asynchronous-http-requests-in-python-with-httpx-and-asyncio:
import asyncio
import httpx
import time

start_time = time.time()

async def get_pokemon(client, url):
    resp = await client.get(url)
    pokemon = resp.json()
    return pokemon['name']

async def main():
    async with httpx.AsyncClient() as client:
        tasks = []
        for number in range(1, 151):
            url = f'https://pokeapi.co/api/v2/pokemon/{number}'
            tasks.append(asyncio.ensure_future(get_pokemon(client, url)))
        original_pokemon = await asyncio.gather(*tasks)
        for pokemon in original_pokemon:
            print(pokemon)

asyncio.run(main())
print("--- %s seconds ---" % (time.time() - start_time))
When run verbatim, the code produces the intended result (a list of Pokemon in less than a second). However, replacing the asyncio.ensure_future with asyncio.create_task instead leads to a long wait (which seems to be related to a DNS lookup timing out) and then exceptions, the first one being:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/anyio/_core/_sockets.py", line 186, in connect_tcp
addr_obj = ip_address(remote_host)
File "/usr/lib/python3.10/ipaddress.py", line 54, in ip_address
raise ValueError(f'{address!r} does not appear to be an IPv4 or IPv6 address')
ValueError: 'pokeapi.co' does not appear to be an IPv4 or IPv6 address
Reducing the range maximum (to 70 on my computer) makes the problem disappear.
I understand https://stackoverflow.com/a/36415477/ as saying that ensure_future and create_task act similarly when given coroutines unless there's a custom event loop, and that create_task is recommended.
If so, why does one of the approaches work while the other fails?
I'm using Python 3.10.5 and httpx 0.23.0.
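For reference, inside a running loop both calls wrap a coroutine in a Task; a quick check (my own sketch, not from the linked answer) shows no observable difference at this level:

import asyncio

async def noop():
    return 42

async def main():
    t1 = asyncio.ensure_future(noop())
    t2 = asyncio.create_task(noop())
    # Both are asyncio.Task instances scheduled on the running loop.
    print(type(t1).__name__, type(t2).__name__)  # Task Task
    print(await t1, await t2)                    # 42 42

asyncio.run(main())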
Here is the source code from the standard Python library (Python 3.10, module asyncio/tasks.py):

def ensure_future(coro_or_future, *, loop=None):
    """Wrap a coroutine or an awaitable in a future.

    If the argument is a Future, it is returned directly.
    """
    return _ensure_future(coro_or_future, loop=loop)


def _ensure_future(coro_or_future, *, loop=None):
    if futures.isfuture(coro_or_future):
        if loop is not None and loop is not futures._get_loop(coro_or_future):
            raise ValueError('The future belongs to a different loop than '
                             'the one specified as the loop argument')
        return coro_or_future
    called_wrap_awaitable = False
    if not coroutines.iscoroutine(coro_or_future):
        if inspect.isawaitable(coro_or_future):
            coro_or_future = _wrap_awaitable(coro_or_future)
            called_wrap_awaitable = True
        else:
            raise TypeError('An asyncio.Future, a coroutine or an awaitable '
                            'is required')

    if loop is None:
        loop = events._get_event_loop(stacklevel=4)
    try:
        return loop.create_task(coro_or_future)
    except RuntimeError:
        if not called_wrap_awaitable:
            coro_or_future.close()
        raise
As you can see, ensure_future does some type checking first. Then it gets an event loop if the loop keyword is not given. Finally it calls create_task and returns the result.
So if you see a difference in behavior, the only possibility is that getting the event loop somehow causes it. This doesn't solve your issue, but it might help direct your debugging efforts.
After more debugging, I've found out that the problem lies elsewhere.
It appears that httpx doesn't cache DNS lookups, so when asked to connect to the same host many times at once, it performs a large number of DNS lookups. In turn, that caused the DNS server to fail to respond to some of the requests.
As luck would have it, even though I tested many times, the request storm happened to make the DNS fail exactly when I was using create_task but not when I was using ensure_future.
In short, due to Murphy's law I was mistaking a nondeterministic problem for a deterministic one. However, it seems that httpx can in general be a bit fickle when it comes to DNS requests, for instance as reported at https://github.com/encode/httpx/discussions/2321.
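A possible mitigation (my suggestion, not part of the original answer): cap concurrency with an asyncio.Semaphore so the resolver never sees 150 simultaneous lookups. httpx's limits parameter bounds connections, but a semaphore also bounds the lookups themselves:

import asyncio
import httpx

# Allow at most 20 requests (and thus lookups) in flight at any moment.
semaphore = asyncio.Semaphore(20)

async def get_pokemon(client, url):
    async with semaphore:  # wait here if 20 requests are already running
        resp = await client.get(url)
        return resp.json()['name']

async def main():
    async with httpx.AsyncClient() as client:
        urls = [f'https://pokeapi.co/api/v2/pokemon/{n}' for n in range(1, 151)]
        names = await asyncio.gather(*(get_pokemon(client, u) for u in urls))
        print(names)

asyncio.run(main())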
I am using Prefect and tried to download a file from S3.
When I hard-coded the AWS credentials, the file downloaded successfully:
import asyncio
from prefect_aws.s3 import s3_download
from prefect_aws.credentials import AwsCredentials
from prefect import flow, get_run_logger

@flow
async def fetch_taxi_data():
    logger = get_run_logger()
    credentials = AwsCredentials(
        aws_access_key_id="xxx",
        aws_secret_access_key="xxx",
    )
    data = await s3_download(
        bucket="hongbomiao-bucket",
        key="hm-airflow/taxi.csv",
        aws_credentials=credentials,
    )
    logger.info(data)

if __name__ == "__main__":
    asyncio.run(fetch_taxi_data())
Now I tried to load the credentials from Prefect Blocks. I created an AWS Credentials Block.
However,
aws_credentials_block = AwsCredentials.load("aws-credentials-block")
data = await s3_download(
    bucket="hongbomiao-bucket",
    key="hm-airflow/taxi.csv",
    aws_credentials=aws_credentials_block,
)
throws the error:
AttributeError: 'coroutine' object has no attribute 'get_boto3_session'
And
aws_credentials_block = AwsCredentials.load("aws-credentials-block")
credentials = AwsCredentials(
    aws_access_key_id=aws_credentials_block.aws_access_key_id,
    aws_secret_access_key=aws_credentials_block.aws_secret_access_key,
)
data = await s3_download(
    bucket="hongbomiao-bucket",
    key="hm-airflow/taxi.csv",
    aws_credentials=credentials,
)
throws the error:
AttributeError: 'coroutine' object has no attribute 'aws_access_key_id'
I didn't find any useful documentation about how to use it.
Am I supposed to use Blocks to load credentials? If so, what is the correct way to use Blocks in Prefect? Thanks!
I just found that the snippet in the screenshot in the question is missing an await.
After adding the await, it works now!
aws_credentials_block = await AwsCredentials.load("aws-credentials-block")
data = await s3_download(
    bucket="hongbomiao-bucket",
    key="hm-airflow/taxi.csv",
    aws_credentials=aws_credentials_block,
)
UPDATE:
Got an answer from Michael Adkins on GitHub, and thanks!
await is only needed if you're writing an async flow or task. For users writing synchronous code, an await is not needed (and not possible). Most of our users are writing synchronous code and the example in the UI is in a synchronous context so it does not include the await.
I saw the source code at
https://github.com/PrefectHQ/prefect/blob/1dcd45637914896c60b7d49254a34e95a9ce56ea/src/prefect/blocks/core.py#L601-L604
@classmethod
@sync_compatible
@inject_client
async def load(cls, name: str, client: "OrionClient" = None):
    # ...

So I think that as long as a function has the @sync_compatible decorator, it can be called both as an async and as a sync function.
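A quick illustration of what that means in practice (my own sketch, based on Michael Adkins's explanation above):

from prefect_aws.credentials import AwsCredentials

# In an async flow or task: load() returns a coroutine, so await it.
async def async_example():
    block = await AwsCredentials.load("aws-credentials-block")
    return block

# In a sync flow or task: the decorator runs the coroutine for you,
# so the call returns the block directly and `await` is not possible.
def sync_example():
    block = AwsCredentials.load("aws-credentials-block")
    return block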
I have a case where I need to find out from the coroutine which task failed.
I'm using asyncio.wait, and when a task fails with an exception, I cannot tell which arguments caused it to fail.
I tried to read the coroutine's cr_frame, but it seems that after the coroutine has run, cr_frame is None.
I tried other things too, like setting an attribute on the coroutine (coro.__mydata = data), but it seems I cannot add attributes dynamically to a coroutine (maybe it's a limitation of Python, I don't know).
Here's some code:
async def main():
    """
    Basically this function sends messages to an API.
    The class takes things like channelID, userID, etc.
    Sometimes channelID or userID are wrong and the task returns an exception.
    """
    resp = await asyncio.wait(self._queue_send_task)
    for r in resp[0]:
        try:
            sent = r.result()
        except:
            # Exception because of wrong args; I need the args to act upon them
            coro = r.get_coro()
            coro.cr_frame  # Returns None; normally this would return a frame
                           # if called before the coro starts
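One workaround sketch (my own; send_message and its failure mode are hypothetical): don't try to recover the arguments from the coroutine object afterwards; record them at task-creation time in a dict keyed by the Task:

import asyncio

async def send_message(channel_id: str, user_id: str):
    # Placeholder for the real API call; fails on bad IDs.
    if channel_id == "bad":
        raise ValueError("unknown channel")
    return "sent"

async def main():
    args_by_task = {}
    for channel_id, user_id in [("ok", "u1"), ("bad", "u2")]:
        task = asyncio.create_task(send_message(channel_id, user_id))
        args_by_task[task] = (channel_id, user_id)  # remember the args

    done, _pending = await asyncio.wait(args_by_task.keys())
    for task in done:
        try:
            task.result()
        except Exception as exc:
            channel_id, user_id = args_by_task[task]  # args that caused the failure
            print(f"task with {channel_id=} {user_id=} failed: {exc!r}")

asyncio.run(main())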
I'm trying to make a simple Slack bot using asyncio, largely using the example here for the asyncio part and here for the Slack bot part.
Both examples work on their own, but when I put them together it seems my loop doesn't loop: it goes through once and then dies. If info is a list of length 1, which happens when a message is typed in a chat room the bot is in, the coroutine is supposed to be triggered, but it never is. (All the coroutine does right now is print the message and, if the message contains "/time", have the bot post the time in the chat room where it was asked.) Keyboard interrupt also doesn't work; I have to close the command prompt every time.
Here is my code:
import asyncio
from slackclient import SlackClient
import time, datetime as dt

token = "MY TOKEN"
sc = SlackClient(token)

@asyncio.coroutine
def read_text(info):
    if 'text' in info[0]:
        print(info[0]['text'])
        if r'/time' in info[0]['text']:
            print(info)
            resp = 'The time is ' + dt.datetime.strftime(dt.datetime.now(), '%H:%M:%S')
            print(resp)
            chan = info[0]['channel']
            sc.rtm_send_message(chan, resp)

loop = asyncio.get_event_loop()
try:
    sc.rtm_connect()
    info = sc.rtm_read()
    if len(info) == 1:
        asyncio.async(read_text(info))
    loop.run_forever()
except KeyboardInterrupt:
    pass
finally:
    print('step: loop.close()')
    loop.close()
I think it's the loop part that's broken, since it never seems to get to the coroutine. So maybe a shorter way of asking this question is: what is it about my try: block that prevents it from looping like the asyncio example I followed does? Is there something about sc.rtm_connect() that it doesn't like?
I'm new to asyncio, so I'm probably doing something stupid. Is this even the best way to try and go about this? Ultimately I want the bot to do some things that take quite a while to compute, and I'd like it to remain responsive in that time, so I think I need to use asyncio or threads in some variety, but I'm open to better suggestions.
Thanks a lot,
Alex
I changed it to the following and it worked:
import asyncio
from slackclient import SlackClient
import time, datetime as dt

token = "MY TOKEN"
sc = SlackClient(token)

@asyncio.coroutine
def listen():
    yield from asyncio.sleep(1)
    x = sc.rtm_connect()
    info = sc.rtm_read()
    if len(info) == 1:
        if 'text' in info[0]:
            print(info[0]['text'])
            if r'/time' in info[0]['text']:
                print(info)
                resp = 'The time is ' + dt.datetime.strftime(dt.datetime.now(), '%H:%M:%S')
                print(resp)
                chan = info[0]['channel']
                sc.rtm_send_message(chan, resp)
    asyncio.async(listen())

loop = asyncio.get_event_loop()
try:
    asyncio.async(listen())
    loop.run_forever()
except KeyboardInterrupt:
    pass
finally:
    print('step: loop.close()')
    loop.close()
Not entirely sure why that fixes it, but the key things I changed were putting the sc.rtm_connect() call inside the coroutine and assigning it to x = sc.rtm_connect(). I also call listen() from itself at the end, which appears to be what makes it loop forever, since the bot stops responding if I take that out. I don't know if this is the way this sort of thing is supposed to be set up, but it does appear to continue accepting commands while it's processing earlier ones; my Slack chat looks like this:
me [12:21 AM]
/time
[12:21]
/time
[12:21]
/time
[12:21]
/time
testbotBOT [12:21 AM]
The time is 00:21:11
[12:21]
The time is 00:21:14
[12:21]
The time is 00:21:16
[12:21]
The time is 00:21:19
Note that it doesn't miss any of my /time requests, which it would if it weren't doing this stuff asynchronously. Also, if anyone is trying to replicate this you'll notice that slack brings up the built in command menu if you type "/". I got around this by typing a space in front.
Thanks for the help, and please let me know if you know of a better way of doing this. It doesn't seem to be a very elegant solution, and the bot can't be restarted after I use a Ctrl-C keyboard interrupt to end it - it says:
Task exception was never retrieved
future: <Task finished coro=<listen() done, defined at asynctest3.py:8> exception=AttributeError("'NoneType' object has no attribute 'recv'",)>
Traceback (most recent call last):
File "C:\Users\Dell-F5\AppData\Local\Programs\Python\Python35-32\Lib\asyncio\tasks.py", line 239, in _step
result = coro.send(None)
File "asynctest3.py", line 13, in listen
info = sc.rtm_read()
File "C:\Users\Dell-F5\Envs\sbot\lib\site-packages\slackclient\_client.py", line 39, in rtm_read
json_data = self.server.websocket_safe_read()
File "C:\Users\Dell-F5\Envs\sbot\lib\site-packages\slackclient\_server.py", line 110, in websocket_safe_read
data += "{0}\n".format(self.websocket.recv())
AttributeError: 'NoneType' object has no attribute 'recv'
Which I guess means it's not closing the websockets nicely. Anyway, that's just an annoyance, at least the main problem is fixed.
Alex
Making blocking IO calls inside a coroutine defeats the very purpose of using asyncio (e.g. info = sc.rtm_read()). If you don't have a choice, use loop.run_in_executor to run the blocking call in a different thread. Careful though: some extra locking might be needed.
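A minimal sketch of the run_in_executor approach (assuming, as above, that sc.rtm_read() blocks):

import asyncio
from slackclient import SlackClient

sc = SlackClient("MY TOKEN")

async def listen():
    loop = asyncio.get_event_loop()
    sc.rtm_connect()
    while True:
        # rtm_read() blocks, so run it in the default thread pool
        # instead of blocking the event loop.
        info = await loop.run_in_executor(None, sc.rtm_read)
        if info and 'text' in info[0]:
            print(info[0]['text'])

asyncio.get_event_loop().run_until_complete(listen())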
However, it seems there are a few asyncio-based Slack client libraries you could use instead:
slacker-asyncio - fork of slacker, based on aiohttp
butterfield - based on slacker and websockets
EDIT: Butterfield uses the Slack real-time messaging API. It even provides an echo bot example that looks very much like what you're trying to achieve:
import asyncio
import butterfield
from butterfield import Bot

@asyncio.coroutine
def echo(bot, message):
    yield from bot.post(
        message['channel'],
        message['text']
    )

bot = Bot('slack-bot-key')
bot.listen(echo)
butterfield.run(bot)