Python 3.6 - Sanic + Motorengine

I am trying to setup the new Sanic web framework (which is promised to be extremely fast) with motorengine in order to achieve 100% async.
My setup so far:
import asyncio

from sanic import Sanic
from motorengine import connect

app = Sanic(__name__)

@app.listener('after_server_start')
async def setup_dbconn(app, loop):
    connect("database_name", username="user", password="pass",
            host="192.168.1.200", port=27017, io_loop=asyncio.get_event_loop())
Unfortunately I get:
motorengine.connection.ConnectionError: Cannot connect to database default :
Unknown option username
Why does this crash?

The specific problem you encounter is not about Sanic; it comes from motorengine, which does not accept the credentials as separate keyword arguments (hence "Unknown option username").
Use the database URI to connect:

motorengine.connect(
    db=name,
    alias=alias,
    host="mongodb://username:password@localhost:port",
    io_loop=tornado.ioloop.IOLoop.instance()
)

(With Sanic you would pass the asyncio event loop instead of Tornado's.)
The solution came from this ticket: https://github.com/heynemann/motorengine/issues/82.
PS: For Sanic >= 0.4.0 you should consider using the sanic-motor extension!
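As a minimal, self-contained sketch of the underlying fix (credentials and host here are placeholders, not the original values): the username and password belong inside the MongoDB URI rather than being passed as separate keyword arguments.

```python
from urllib.parse import quote_plus

def build_mongo_uri(user, password, host, port):
    """Build a MongoDB URI with embedded credentials, the form motorengine expects.

    quote_plus guards against special characters like '@' or ':' in the password.
    """
    return "mongodb://%s:%s@%s:%d" % (quote_plus(user), quote_plus(password), host, port)

uri = build_mongo_uri("user", "pass", "192.168.1.200", 27017)
# -> "mongodb://user:pass@192.168.1.200:27017"
```

The resulting URI is what you would pass as host= to motorengine.connect(), together with the event loop of your choice.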

Related

Difficulty accessing Google Text-to-Speech API from within Flask app

I am having difficulties accessing Google's texttospeech Python API from within a Flask app. The app is running in a container with Debian 10, Nginx/Gunicorn, and Python 3.7 installed. In the code below, the client connects successfully, but the synthesize_speech request hangs indefinitely (without any error message). When I run the same code from a Python script in the same container without Flask, the speech synthesis request is successful. However, I can call other external APIs, such as those at AWS, from my Flask app without any problems.
What could be causing this, or how could I go about diagnosing the problem? I have tried switching to version 1.0.1 of the texttospeech library, but without success. Presumably the problem isn't my credentials, which I believe I have set up correctly; otherwise the connection request wouldn't be successful.
from google.cloud import texttospeech

# Connect (this is successful)
client = texttospeech.TextToSpeechClient()

input_text = texttospeech.SynthesisInput(text="God dag")
voice_parameters = texttospeech.VoiceSelectionParams(
    language_code="sv-SE",
    name="sv-SE-Wavenet-A"
)
audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.MP3
)

# Synthesize speech (this never completes)
response = client.synthesize_speech(
    request={
        "input": input_text,
        "voice": voice_parameters,
        "audio_config": audio_config
    }
)
Output of pip freeze:
google-api-core==1.22.1
google-auth==1.20.1
google-cloud-texttospeech==2.2.0
googleapis-common-protos==1.52.0
grpcio==1.31.0
...
It turns out the issue was caused by the monkey-patching done by gunicorn's gevent worker class. I managed to resolve the issue by changing the worker_class in gunicorn to "sync", as suggested on this page for a similar issue:
https://github.com/googleapis/google-cloud-python/issues/5848
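A minimal gunicorn.conf.py sketch of that change (the worker count, bind address, and timeout are illustrative values, not taken from the original setup):

```python
# gunicorn.conf.py - use the plain sync worker instead of gevent,
# since gevent's monkey-patching interferes with grpc-based Google clients.
worker_class = "sync"
workers = 4            # scale blocking workers instead of relying on greenlets
bind = "0.0.0.0:8000"
timeout = 120          # speech synthesis calls can take a while
```

Started with e.g. gunicorn -c gunicorn.conf.py myapp:app (module path hypothetical).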

How can we create asynchronous API in django?

I want to create a third-party chatbot API which is asynchronous and replies "ok" after a 10 second pause.
import time

def wait():
    time.sleep(10)
    return "ok"

# views.py
def api(request):
    return wait()
I have tried celery for the same, as follows, where I am waiting for the celery result in the view itself:

import time
from celery import shared_task
from celery.result import AsyncResult

@shared_task
def wait():
    time.sleep(10)
    return "ok"

# views.py
def api(request):
    a = wait.delay()
    work = AsyncResult(a.id)
    while True:
        if work.ready():
            return work.get(timeout=1)
But this solution works synchronously and makes no difference. How can we make it asynchronous without asking our user to keep on requesting until the result is received?
As mentioned in @Blusky's answer:
The asynchronous API will exist in Django 3.x, not before.
If this is not an option, then the answer is simply no.
Please note as well that even with Django 3.x, any Django code that accesses the database will not be asynchronous; it has to be executed in a thread (thread pool).
Celery is intended for background or deferred tasks, but celery will never return an HTTP response, as it didn't receive the HTTP request to which it should respond. Celery is also not asyncio-friendly.
You might have to think of changing your architecture / implementation.
Look at your overall problem and ask yourself whether you really need an asynchronous API with Django.
Is this API intended for browser applications or for machine to machine applications?
Could your clients use web sockets and wait for the answer?
Could you separate blocking and non-blocking parts on your server side?
Use Django for everything non-blocking and for everything periodic / deferred (Django + celery), and implement the asynchronous part with web server plugins, Python ASGI code, or web sockets.
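To make the thread-pool point above concrete, here is a minimal sketch (blocking_db_call is a hypothetical stand-in for synchronous ORM code) of keeping an event loop responsive while blocking work runs in a ThreadPoolExecutor:

```python
import asyncio
import concurrent.futures
import time

executor = concurrent.futures.ThreadPoolExecutor(max_workers=2)

def blocking_db_call():
    """Stand-in for synchronous Django ORM code (hypothetical)."""
    time.sleep(0.1)
    return "ok"

async def handler():
    # Offload the blocking call to the thread pool; the event loop stays
    # free to serve other requests while the worker thread sleeps.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(executor, blocking_db_call)

print(asyncio.run(handler()))  # -> ok
```

This is the same pattern the Starlette example further down uses for calling synchronous Django functions.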
Some ideas
Use Django + nginx nchan (if your web server is nginx)
Link to nchan: https://nchan.io/
Your API call would create a task id, start a celery task, and immediately return the task id or a polling URL.
The polling URL would be handled, for example, via an nchan long-polling channel.
Your client connects to the URL corresponding to an nchan long-polling channel, and celery unblocks it whenever your task is finished (the 10 seconds are over).
Use Django + an ASGI server + one hand-coded view, with a strategy similar to nginx nchan
Same logic as above, but instead of nginx nchan you implement it yourself.
Use an ASGI server + a non-blocking framework (or just some hand-coded ASGI views) for all blocking URLs and Django for the rest.
They might exchange data via the database, local files, or local HTTP requests.
Just stay blocking and throw enough worker processes / threads at your server
This is probably the worst suggestion, but if it is just for personal use, and you know how many requests you will have in parallel, then just make sure you have enough Django workers so that you can afford to be blocking.
In this case you would block an entire Django worker for each slow request.
Use websockets, e.g. with the channels module for Django
Websockets can be implemented with earlier versions of Django (>= 2.2) via the django channels module (pip install channels) ( https://github.com/django/channels )
You need an ASGI server to serve the asynchronous part. You could use, for example, Daphne or uvicorn (the channels documentation explains this rather well).
Addendum 2020-06-01: simple async example calling synchronous django code
The following code uses the starlette module, as it is quite simple and small.
miniasyncio.py

import asyncio
import concurrent.futures
import os

import django
from starlette.applications import Starlette
from starlette.responses import Response
from starlette.routing import Route

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'pjt.settings')
django.setup()

from django_app.xxx import synchronous_func1
from django_app.xxx import synchronous_func2

executor = concurrent.futures.ThreadPoolExecutor(max_workers=2)

async def simple_slow(request):
    """Simple function that sleeps in an async manner."""
    await asyncio.sleep(5)
    return Response('hello world')

async def call_slow_dj_funcs(request):
    """Slow django code is run in a thread pool so the event loop is not blocked."""
    loop = asyncio.get_running_loop()
    func1_result = await loop.run_in_executor(executor, synchronous_func1)
    func2_result = await loop.run_in_executor(executor, synchronous_func2)
    response_txt = "OK"
    return Response(response_txt, media_type="text/plain")

routes = [
    Route("/simple", endpoint=simple_slow),
    Route("/slow_dj_funcs", endpoint=call_slow_dj_funcs),
]

app = Starlette(debug=True, routes=routes)
You could, for example, run this code with:
pip install starlette uvicorn
uvicorn --port 8002 miniasyncio:app
Then, on your web server, route these specific URLs to uvicorn and not to your Django application server.
The best option is to use the future async API, which will be introduced in Django's 3.1 release (already available in alpha):
https://docs.djangoproject.com/en/dev/releases/3.1/#asynchronous-views-and-middleware-support
(However, you will need to use an ASGI web worker to make it work properly.)
Check out Django 3's ASGI (Asynchronous Server Gateway Interface) support:
https://docs.djangoproject.com/en/3.0/howto/deployment/asgi/
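To make the ASGI interface concrete, here is a minimal hand-written ASGI application (not Django-specific; Django 3.x exposes this same callable interface via get_asgi_application()):

```python
async def app(scope, receive, send):
    """A minimal ASGI application: an async callable taking the connection
    scope plus receive/send channels, the interface every ASGI server speaks."""
    assert scope["type"] == "http"
    # First message: status line and headers.
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    # Second message: the response body.
    await send({
        "type": "http.response.body",
        "body": b"hello from ASGI",
    })
```

Any ASGI server (uvicorn, daphne) can serve such a callable, e.g. uvicorn module:app.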

FastAPI websocket can not connect

I am trying to let my Vue.js app communicate with my FastAPI (based on starlette) local server using websockets. I tried using the exact same code as in their example: https://fastapi.tiangolo.com/tutorial/websockets/. However, something weird happens: my server cannot start, failing with AttributeError: 'FastAPI' object has no attribute 'websocket'. That is strange, because this exact code is in the official FastAPI docs.
After that I used the Starlette example code: https://www.starlette.io/websockets/. However, when I try to connect to it, FastAPI prints to the terminal: WARNING: Invalid HTTP request received.
I tried using another client, the Simple WebSocket Client: https://chrome.google.com/webstore/detail/simple-websocket-client/pfdhoblngboilpfeibdedpjgfnlcodoo, but the same error appears on the terminal.
What am I doing wrong here? In the first place, I find it weird that the FastAPI code does not seem to work on my computer; does anyone know why?
Thanks in advance!
Apparently the WebSocket functionality was added in FastAPI 0.24, which was just released. I was using an older version.
Run pip install websockets and configure it as follows:

from fastapi import FastAPI, WebSocket

app = FastAPI()

@app.websocket("/ws")
async def send_data(websocket: WebSocket):
    print('CONNECTING...')
    await websocket.accept()
    while True:
        try:
            await websocket.receive_text()
            resp = {
                "message": "message from websocket"
            }
            await websocket.send_json(resp)
        except Exception as e:
            print(e)
            break
    print("CONNECTION DEAD...")

405 Client Error: Method Not Allowed for url: https://rinkeby.infura.io PYTHON

I can't call createFilter on my contract. This is my code below; I am able to get the contract and run functions on it, but I can't listen to events:

myContract = w3.eth.contract(address="some_address", abi=contract_abi)

This is where I run into issues:

myfilter_new = myContract.events.Transfer.createFilter(fromBlock=0, toBlock='latest')

This is the error I get:

HTTPError at /testing/
405 Client Error: Method Not Allowed for url: https://rinkeby.infura.io/my_api_key

As @smarx said, INFURA does not support filters over HTTP. They do have some support over sockets, but I believe it is not yet production-ready. It's worth a try, though.
Web3.py has a built-in way to connect using websockets that you can initialize like so:
from web3 import Web3
w3 = Web3(Web3.WebsocketProvider("wss://mainnet.infura.io/ws"))

Flask streaming doesn't work on my machine

Related to this question.
I'm trying to use Flask streaming and having difficulty doing so; it seems my example works on any machine I try it on other than my own. I'm running Flask 0.10.1, Werkzeug 0.9.4, and Python 2.7.6. I have reinstalled Flask and Werkzeug with no effect. If someone could suggest a way to debug this I would be very grateful.
The problem I experience is that data doesn't get sent to the client while the stream is open; it is only sent once the stream is closed (e.g., in the example below, when the generator function returns).
#!/usr/bin/env python
from flask import Flask, Response
from time import sleep

def stream():
    n = 10
    while n > 0:
        yield "data: hi\n\n"
        sleep(0.5)
        n = n - 1

app = Flask(__name__)

@app.route("/events")
def streamSessionEvents():
    return Response(
        stream(),
        mimetype="text/event-stream"
    )

# ...also has a route to output an html/js client to consume the above

app.run(threaded=True)
Revisited this recently; it was indeed the virus scanner messing with my network traffic.
I'm using Sophos, and found that if I add the address of the machine hosting my Flask application to the virus scanner's "allowed sites" list, then my SSE events are received correctly.
