Async POST call in Python 2.7?

I have a project on Python 2.7. I need to make an async POST call to connect to AWS. I have code for an async call in Python 3.5 using asyncio, but my code needs to work on Python 2.7. Please guide me on how to resolve this issue.
import asyncio
import json
from aiohttp import ClientSession

HEADERS = {'Content-type': 'application/json'}

async def hello(url):
    data = {"mac": 'mm', 'minor': 3, 'distance': 1, 'timestamp': 4444,
            'uuid': 'aa', 'rssi': 1, 'tx': 34}
    async with ClientSession() as session:
        async with session.post(url, json=data) as response:
            response = await response.read()
            print(response)

loop = asyncio.get_event_loop()
url = "http://192.168.101.74:9090/api/postreader"
while True:
    loop.run_until_complete(hello(url))

Try using gevent instead of asyncio; gevent still supports Python 2.7:
http://www.gevent.org/
https://pypi.org/project/gevent/#downloads
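For example, here is a minimal sketch using gevent together with the requests library (an extra dependency here), reusing the URL and payload from the question. gevent.monkey.patch_all() makes the standard library's blocking socket calls cooperative, so several POSTs can run concurrently on green threads:
import gevent.monkey
gevent.monkey.patch_all()  # make blocking socket calls cooperative

import gevent
import requests

URL = "http://192.168.101.74:9090/api/postreader"
DATA = {"mac": 'mm', 'minor': 3, 'distance': 1, 'timestamp': 4444,
        'uuid': 'aa', 'rssi': 1, 'tx': 34}

def post_once():
    # requests sets the application/json Content-Type itself when json= is used
    response = requests.post(URL, json=DATA)
    print(response.text)

# spawn several concurrent POSTs on green threads and wait for all of them
jobs = [gevent.spawn(post_once) for _ in range(5)]
gevent.joinall(jobs)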

Related

Asyncio not running Aiohttp requests in parallel

I want to run many HTTP requests in parallel using Python.
I tried the aiohttp module with asyncio.
import aiohttp
import asyncio

async def main():
    async with aiohttp.ClientSession() as session:
        for i in range(10):
            async with session.get('https://httpbin.org/get') as response:
                html = await response.text()
                print('done' + str(i))

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
I expected it to execute all the requests in parallel, but they are executed one by one.
I later solved this using threading, but I would like to know what's wrong with this approach.
You need to make the requests concurrently. Currently you have a single task defined by main(), so the HTTP requests run serially within that one task.
You can also use asyncio.run() if you are on Python 3.7+, which abstracts away creating the event loop:
import aiohttp
import asyncio

async def getResponse(session, i):
    async with session.get('https://httpbin.org/get') as response:
        html = await response.text()
        print('done' + str(i))

async def main():
    async with aiohttp.ClientSession() as session:
        tasks = [getResponse(session, i) for i in range(10)]  # create a list of coroutines
        await asyncio.gather(*tasks)  # execute them concurrently

asyncio.run(main())
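Note that asyncio.gather schedules all ten coroutines on the event loop before awaiting any of them, so the session.get calls overlap in time instead of running one after another as in the original loop.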

How to read lines of a streaming api with aiohttp in Python?

I am trying to convert a Python HTTP client from requests to aiohttp. The logic is to send a GET call to a REST endpoint that streams data occasionally and print each line it returns.
I have working code using requests with the stream=True option and iter_lines:
import json
import requests

def main():
    with requests.get('https://my-streaming-url.com', stream=True) as r:
        if r.encoding is None:
            r.encoding = 'utf-8'
        for line in r.iter_lines(decode_unicode=True):
            if line:
                # Print each line emitted by the streaming api
                print(json.loads(line))

if __name__ == '__main__':
    main()
Now I want to convert this logic to the aiohttp streaming API and tried:
import asyncio
import aiohttp
import json

loop = asyncio.get_event_loop()

async def connect_and_listen():
    r = aiohttp.request('get', 'https://my-streaming-url.com')
    async for line in r.content:
        print(json.loads(line))

if __name__ == '__main__':
    loop.run_until_complete(connect_and_listen())
    loop.close()
I get an error like:
... in connect_and_listen
    async for line in r.content:
AttributeError: '_SessionRequestContextManager' object has no attribute 'content'
sys:1: RuntimeWarning: coroutine 'ClientSession._request' was never awaited
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x7fac6ec24310>
I tried a few things, like removing loop.close() and removing async from the for loop, but none of them helped.
What am I missing here? How can I print a streaming API's lines with aiohttp?
P.S.: My Python version is 3.7.5.
As use of the ClientSession class is encouraged throughout the documentation, I also wrapped the call in a session as follows, and it worked:
async def main():
    async with aiohttp.ClientSession(raise_for_status=True) as session:
        async with session.get(cashcog_stream_url) as r:
            async for line in r.content:
                print(json.loads(line))
Another point: loop.close() apparently does not affect the way the app works and can be removed.
You're missing the await keyword.
aiohttp.request is an async context manager; you should use it with an async with statement:
async def main():
    async with aiohttp.request('get', 'http://localhost:5000') as r:
        async for line in r.content:
            print(json.loads(line))
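Using async with here both awaits the request coroutine and releases the connection when the block exits, which is also what makes the "Unclosed client session" warning go away.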

Async HTML Parse with Beautifulsoup4 in Python

I'm making a Python web scraper script. It should use asyncio, so for async HTTP requests I use aiohttp.
That part is fine, but when I try to keep the app non-blocking (await), beautifulsoup4 blocks the application (because beautifulsoup4 doesn't support async).
This is what I tried:
import asyncio
import aiohttp
from bs4 import BeautifulSoup

async def extractLinks(html):
    soup = BeautifulSoup(html, 'html.parser')
    return soup.select(".c-pro-box__title a")

async def getHtml(session, url):
    async with session.get(url) as response:
        return await response.text()

async def loadPage(url):
    async with aiohttp.ClientSession() as session:
        html = await getHtml(session, url)
        links = await extractLinks(html)
        return links

loop = asyncio.get_event_loop()
loop.run_until_complete(loadPage('https://example.com'))  # placeholder URL; the original call was missing the url argument
extractLinks() blocks the program flow.
So, is it possible to make it non-blocking? Or is there any library other than beautifulsoup4 that supports async?
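One common approach, requiring no changes to BeautifulSoup itself, is to keep extractLinks a plain synchronous function and push the parse onto a thread pool with loop.run_in_executor, so the event loop stays free to serve other tasks while the CPU-bound parse runs. A minimal sketch, with a placeholder URL:
import asyncio
import aiohttp
from bs4 import BeautifulSoup

def extractLinks(html):
    # plain synchronous function; it will run in a worker thread
    soup = BeautifulSoup(html, 'html.parser')
    return soup.select(".c-pro-box__title a")

async def loadPage(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            html = await response.text()
    loop = asyncio.get_event_loop()
    # offload the CPU-bound parse to the default ThreadPoolExecutor
    return await loop.run_in_executor(None, extractLinks, html)

loop = asyncio.get_event_loop()
links = loop.run_until_complete(loadPage('https://example.com'))  # placeholder URL
print(links)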

Can someone explain why Python aiohttp returns more response content than requests.get?

Recently I've been looking at the Python aiohttp library, playing around with it and comparing it with requests. Here is the code:
import aiohttp
import asyncio
import requests

request_url = 'http://www.baidu.com'
requests_resp = requests.get(request_url)

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()

async def main():
    async with aiohttp.ClientSession() as session:
        aio_resp = await fetch(session, request_url)
        print('aio_resp_length =', len(aio_resp))

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
print('requests_resp_length = ', len(requests_resp.text))
The response lengths show a huge difference:
aio_resp_length = 152576
requests_resp_length = 2381
Not sure what happens inside aiohttp's session.get, but the result is not always like this. When you change request_url to http://www.example.com, the response lengths are the same. Can someone tell me what is happening here?
Cheers
Because aiohttp has newlines in its response and requests doesn't.
You can check their responses like this:
print('requests_resp_length = ', requests_resp.text[0:100])
print('aio_resp_length =', aio_resp[0:100])

Using Requests library to make asynchronous requests with Python 3.7

I need to make asynchronous requests using the Requests library. In Python 3.7, if I try from requests import async I get SyntaxError: invalid syntax.
async has become a reserved keyword in Python 3.7. How do I get around this?
Lukasa, who works on the requests library, said:
At the current time there are no plans to support async and await. This is not because they aren't a good idea: they are. It's because to use them requires quite substantial code changes.
Right now requests is a purely synchronous library that, at the bottom of its stack, uses httplib to send and receive data. We cannot move to an async model unless we replace httplib. The best we could do is provide a shorthand to run a request in a thread, but asyncio already has just such a shorthand, so I don't believe it would be valuable.
Right now I am quietly looking at whether we can rewrite requests to work just as well in a synchronous environment as in an async one. However, the reality is that doing so will be a lot of work, involving rewriting a lot of our stack, and may not happen for many years, if ever.
But don't worry: aiohttp is very similar to requests.
Here's an example:
import aiohttp
import asyncio

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()

async def main():
    async with aiohttp.ClientSession() as session:
        html = await fetch(session, 'http://python.org')
        print(html)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
You can use asyncio to make asynchronous requests. Here is an example:
import asyncio
import requests

async def main():
    loop = asyncio.get_event_loop()
    futures = [
        loop.run_in_executor(
            None,
            requests.get,
            'http://example.org/'
        )
        for i in range(20)
    ]
    for response in await asyncio.gather(*futures):
        pass

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
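Note that run_in_executor(None, ...) submits each requests.get call to the event loop's default ThreadPoolExecutor, so the blocking calls run on worker threads while the loop awaits the resulting futures. This gives you concurrency without requests itself being async.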
