I'm trying to make use of the channels project (http://channels.readthedocs.org/en/latest/index.html) with Django.
While the docs include a good tutorial for building a Group-based websocket application (chat), I couldn't find anything about a simple push mechanism that is client specific (so no need to use Group).
Let's say I want to build a feed aggregator with various news providers: when a user visits the homepage and waits for all the feeds to get parsed, I want to send them informational messages about which feed the server is currently parsing.
What I have now is:
consumers.py
from channels import Group, Channel
from .views import sort_articles_by_date
from .soup import ProviderParser
from .models import Provider


# Connected to websocket.connect and websocket.keepalive
def ws_add(message):
    Group("news_providers_loading").add(message.reply_channel)


def ws_message(message):
    providers = Provider.objects.all()
    articles = []
    for provider in providers:
        Group("news_providers_loading").send({'content': str(provider)})
        parser = ProviderParser(provider)
        articles.extend(parser.parse_articles())
    sort_articles_by_date(articles)


# Connected to websocket.disconnect
def ws_disconnect(message):
    Group("news_providers_loading").discard(message.reply_channel)
routing.py
channel_routing = {
    "websocket.connect": "news_providers.consumers.ws_add",
    "websocket.keepalive": "news_providers.consumers.ws_add",
    "websocket.receive": "news_providers.consumers.ws_message",
    "websocket.disconnect": "news_providers.consumers.ws_disconnect",
}
Though it works OK, I can't help but feel that it's a bit overkill.
Is there a way to just make use of the Channel constructor, instead of Group?
Thanks :)
Update:
channels version = 0.9
Channels is at 0.9 now, so some changes are required for the client to receive the message from the server:
from json import dumps


class Content:
    def __init__(self, reply_channel):
        self.reply_channel = reply_channel

    def send(self, json):
        self.reply_channel.send({
            'reply_channel': str(self.reply_channel),
            'text': dumps(json)
        })


def ws_message(message):
    content = Content(message.reply_channel)
    content.send({'hello': 'world'})
routing.py stays the same...
channels version < 0.9
Bah, it was a bit tricky, but I found it.
You have to use the message's reply_channel property.
So this:
Group("news_providers_loading").send({'content': str(provider)})
turns into this:
Channel(message.reply_channel).send({'content': str(provider)})
What I have now is:
from channels import Channel
from .soup import ProviderParser, sort_articles_by_date
from .models import Provider
from django.template.loader import render_to_string
from json import dumps


class Content:
    def __init__(self, reply_channel):
        self.reply_channel = reply_channel

    def send(self, json):
        Channel(self.reply_channel).send({'content': dumps(json)})


def ws_message(message):
    providers = Provider.objects.all()
    content = Content(message.reply_channel)
    content.send({'providers_length': len(providers)})
    articles = []
    for provider in providers:
        content.send({'provider': str(provider)})
        parser = ProviderParser(provider)
        articles.extend(parser.parse_articles())
    sort_articles_by_date(articles)
    html = render_to_string('news_providers/article.html', {'articles': articles})
    content.send({'html': html})
routing.py
channel_routing = {
    "websocket.receive": "news_providers.consumers.ws_message",
}
That seems lighter, though you might want to keep the connect, keepalive and disconnect methods (as simple no-op methods); I'm not entirely sure about that!
# connect, keepalive and disconnect
def ws_foo(message):
    pass
routing.py
channel_routing = {
    "websocket.connect": "news_providers.consumers.ws_foo",
    "websocket.keepalive": "news_providers.consumers.ws_foo",
    "websocket.receive": "news_providers.consumers.ws_message",
    "websocket.disconnect": "news_providers.consumers.ws_foo",
}
Related
I'm attempting to send information to consumers.py from outside of consumers.py so it can be displayed on the client end.
I've referenced the previous question "Send message using Django Channels from outside Consumer class", but the .group_send and .group_add methods don't seem to exist, so I feel it's possible I'm missing something very easy.
consumers.py
from channels.generic.websocket import WebsocketConsumer
from asgiref.sync import async_to_sync


class WSConsumer(WebsocketConsumer):
    def connect(self):
        async_to_sync(self.channel_layer.group_add)("appDel", self.channel_name)
        self.accept()
        self.render()
appAlarm.py
import pandas as pd
import requests as r
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer


def appFunc(csvUpload):
    # csvUpload = pd.read_csv(request.FILES['filename'])
    csvFile = pd.DataFrame(csvUpload)
    colUsernames = csvFile.usernames
    print(colUsernames)
    channel_layer = get_channel_layer()
    for user in colUsernames:
        req = r.get('https://reqres.in/api/users/2')
        print(req)
        t = req.json()
        data = t['data']['email']
        print(user + " " + data)
        message = user + " " + data
        async_to_sync(channel_layer.group_send)(
            'appDel',
            {'type': 'render', 'message': message}
        )
It's throwing this error:
async_to_sync(channel_layer.group_send)(
AttributeError: 'NoneType' object has no attribute 'group_send'
It will throw the same error for group_add when I strip it back further to figure out what's going on, but per the documentation HERE I feel like this should be working.
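A likely cause of this particular error (an inference, not stated in the original thread) is that get_channel_layer() returns None when no CHANNEL_LAYERS backend is configured in settings.py. A minimal channels_redis configuration would look something like the sketch below; the host and port are assumptions.

# settings.py (sketch; assumes channels_redis is installed and Redis is reachable locally)
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("127.0.0.1", 6379)],
        },
    },
}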
To anyone looking at this in the future: I was not able to use Redis (or even Memurai) on Windows due to cost. I ended up using server-sent events (SSE), specifically django-eventstream, and so far it has worked great since I didn't need the client to interact with the server; for a chat application this would not work.
Eventstream creates an endpoint at /events/ that the client can connect to in order to receive a streaming HTTP response.
Sending data from externalFunc.py:
send_event('test', 'message', {'text': 'Hello World'})
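For context, a minimal sketch of the urls.py wiring that exposes the /events/ endpoint, based on django-eventstream's documented setup; the 'test' channel name matches the send_event call above, and the exact options may differ between versions.

# urls.py (sketch)
from django.urls import path, include
import django_eventstream

urlpatterns = [
    # expose the SSE endpoint and pin it to the 'test' channel used by send_event
    path('events/', include(django_eventstream.urls), {'channels': ['test']}),
]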
Event listener in HTML page:
var es = new ReconnectingEventSource('/events/');

es.addEventListener('message', function (e) {
    console.log(e.data);
    var para = document.createElement("P");
    const obj = JSON.parse(e.data);
    para.innerText = obj.text;
    document.body.appendChild(para);
}, false);

es.addEventListener('stream-reset', function (e) {
}, false);
I want to test my Tornado Python application with pytest.
For that purpose, I want to have a mock DB for Mongo and use a "fake" Motor client to simulate the calls to MongoDB.
I found a lot of solutions for pymongo, but not for Motor.
Any idea?
I do not clearly understand your problem; why not just have hard-coded JSON data?
If you just want to have a class that would mock the following:
from motor.motor_tornado import MotorClient
client = MotorClient(MONGODB_URL)
my_db = client.my_db
result = await my_db['my_collection'].insert_one(my_json_load)
So I recommend creating a class:
from types import SimpleNamespace


class Collection:
    database = []

    async def insert_one(self, data):
        self.database.append(data)
        data['_id'] = "5063114bd386d8fadbd6b004"  ## You may make it random or sequential
        ...
        ## Also, you may save the 'database' list to a pickle on disk to preserve data between runs
        return data

    async def find_one(self, data):
        ## Search in the list
        return data

    async def delete_one(self, data_id):
        ## Return an object that mimics motor's DeleteResult and its deleted_count attribute
        return SimpleNamespace(deleted_count=1)


## Then create a collection:
my_db = {}
my_db['my_collection'] = Collection()
### The following is the part of 'views.py'
from tornado.web import RequestHandler, authenticated
from tornado.escape import xhtml_escape


class UserHandler(RequestHandler):
    async def post(self, name):
        getusername = xhtml_escape(self.get_argument("user_variable"))
        my_json_load = {'username': getusername}
        result = await my_db['my_collection'].insert_one(my_json_load)
        ...
        return self.write(result)
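As a usage sketch (not part of the original answer), the mock collection can be exercised directly in a coroutine test. The pytest.mark.asyncio decorator (from pytest-asyncio) and the test name below are assumptions about the test setup:

import pytest

@pytest.mark.asyncio
async def test_insert_one_assigns_an_id():
    # build a fake db dict exactly like the answer does and call the mocked method
    my_db = {'my_collection': Collection()}
    result = await my_db['my_collection'].insert_one({'username': 'alice'})
    assert result['_id'] == "5063114bd386d8fadbd6b004"
    assert result in my_db['my_collection'].database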
If you would clarify your question, I will develop my answer.
I use Python telegram bot API for my bot.
I want to generate photos locally and send them as inline results, but InlineQueryResultPhoto accepts only photo URLs.
Suppose my project structure looks like this:
main.py
photo.jpg
How do I send photo.jpg as an inline result?
Here is the code of main.py:
from uuid import uuid4
from telegram.ext import InlineQueryHandler, Updater
from telegram import InlineQueryResultPhoto


def handle_inline_request(update, context):
    update.inline_query.answer([
        InlineQueryResultPhoto(
            id=uuid4(),
            photo_url='',  # WHAT DO I PUT HERE?
            thumb_url='',  # AND HERE?
        )
    ])


updater = Updater('TELEGRAM_TOKEN', use_context=True)
updater.dispatcher.add_handler(InlineQueryHandler(handle_inline_request))
updater.start_polling()
updater.idle()
There is no direct answer, because the Telegram Bot API doesn't provide this.
But there are two workarounds: you can upload a photo to Telegram's servers and then use InlineQueryResultCachedPhoto, or you can upload it to any image server and then use InlineQueryResultPhoto.
InlineQueryResultCachedPhoto
This first option requires you to upload the photo to Telegram's servers before creating the result list. Which options do you have? The bot can message you the photo, and you get that information and use what you need. Another option is creating a private channel where your bot can post the photos it will reuse. The only detail of this method is getting to know the channel_id (How to obtain the chat_id of a private Telegram channel?).
Now let's see some code:
from config import tgtoken, privchannID
from uuid import uuid4
from telegram import Bot, InlineQueryResultCachedPhoto

bot = Bot(tgtoken)


def inlinecachedphoto(update, context):
    query = update.inline_query.query
    if query == "/CachedPhoto":
        infophoto = bot.sendPhoto(chat_id=privchannID,
                                  photo=open('logo.png', 'rb'),
                                  caption="some caption")
        thumbphoto = infophoto["photo"][0]["file_id"]
        originalphoto = infophoto["photo"][-1]["file_id"]
        results = [
            InlineQueryResultCachedPhoto(
                id=uuid4(),
                title="CachedPhoto",
                photo_file_id=originalphoto)
        ]
        update.inline_query.answer(results)
When you send a photo to a chat/group/channel, you can obtain the file_id, the file_id of the thumbnail, the caption and other details I'm going to skip. What's the problem? If you don't filter for the right query, you may end up sending the photo to your private channel multiple times. It also means autocomplete won't work.
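One way to avoid re-sending the photo on every inline query (a sketch, not part of the original answer) is to cache the file_ids the first time the photo is posted to the private channel; bot and privchannID below reuse the names defined above:

# Cache of file_ids keyed by local file name, so 'logo.png' is only posted once
_file_id_cache = {}

def get_cached_photo_ids(path):
    if path not in _file_id_cache:
        infophoto = bot.sendPhoto(chat_id=privchannID,
                                  photo=open(path, 'rb'),
                                  caption="some caption")
        _file_id_cache[path] = {
            "thumb": infophoto["photo"][0]["file_id"],
            "original": infophoto["photo"][-1]["file_id"],
        }
    return _file_id_cache[path]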
InlineQueryResultPhoto
The other alternative is to upload the photo to the internet and then use the URL. Excluding options like your own hosting, you can use free image hosts that provide APIs (for example imgur or imgbb). For this code, generating your own key on imgbb is simpler than on imgur. Once generated:
import requests
import json
import base64
from uuid import uuid4
from config import tgtoken, key_imgbb
from telegram import InlineQueryResultPhoto


def uploadphoto():
    with open("figure.jpg", "rb") as file:
        url = "https://api.imgbb.com/1/upload"
        payload = {
            "key": key_imgbb,
            "image": base64.b64encode(file.read()),
        }
        response = requests.post(url, payload)
    if response.status_code == 200:
        return {"photo_url": response.json()["data"]["url"],
                "thumb_url": response.json()["data"]["thumb"]["url"]}
    return None
def inlinephoto(update, context):
    query = update.inline_query.query
    if query == "/URLPhoto":
        upphoto = uploadphoto()
        if upphoto:
            results = [
                InlineQueryResultPhoto(
                    id=uuid4(),
                    title="URLPhoto",
                    photo_url=upphoto["photo_url"],
                    thumb_url=upphoto["thumb_url"])
            ]
            update.inline_query.answer(results)
This code is similar to the previous method (and has the same problems): you upload multiple times if you don't filter the query, and you won't have autocomplete when writing the inline query.
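For completeness, a sketch of how one of these inline handlers could be wired into an Updater, mirroring the setup from the question (register inlinecachedphoto the same way if you go with the cached approach):

from telegram.ext import InlineQueryHandler, Updater

updater = Updater(tgtoken, use_context=True)
updater.dispatcher.add_handler(InlineQueryHandler(inlinephoto))
updater.start_polling()
updater.idle()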
Disclaimer
Both pieces of code were written assuming the images you want to upload are generated at the moment you receive the query; otherwise you can do the work before receiving the query and save that info in a database.
Bonus
You can run your own bot to get the channel_id of your private channel with pyTelegramBotAPI:
import telebot

bot = telebot.TeleBot(bottoken)


@bot.channel_post_handler(commands=["getchannelid"])
def chatid(message):
    bot.reply_to(message, 'channel_id = {!s}'.format(message.chat.id))


bot.polling()
To get the id, you need to write /getchannelid@botname in the channel.
I'm using the Open Tracing Python library for GRPC and am trying to build off of the example script here: https://github.com/opentracing-contrib/python-grpc/blob/master/examples/trivial/trivial_client.py.
Once I have sent a request through the intercepted channel, how do I find the trace-id value for the request? I want to use this to look at the traced data in the Jaeger UI.
I had missed a key piece of documentation: in order to get a trace ID, you have to create a span on the client side. This span will have the trace ID that can be used to examine data in the Jaeger UI. The span has to be passed into the GRPC messages via an ActiveSpanSource instance.
# opentracing-related imports
from grpc_opentracing import open_tracing_client_interceptor, ActiveSpanSource
from grpc_opentracing.grpcext import intercept_channel
from jaeger_client import Config


# dummy class to hold span data for passing into the GRPC channel
class FixedActiveSpanSource(ActiveSpanSource):
    def __init__(self):
        self.active_span = None

    def get_active_span(self):
        return self.active_span


config = Config(
    config={
        'sampler': {
            'type': 'const',
            'param': 1,
        },
        'logging': True,
    },
    service_name='foo')
tracer = config.initialize_tracer()

# ...
# In the method where GRPC requests are sent
# ...

active_span_source = FixedActiveSpanSource()
tracer_interceptor = open_tracing_client_interceptor(
    tracer,
    log_payloads=True,
    active_span_source=active_span_source)

with tracer.start_span('span-foo') as span:
    print(f"Created span: trace_id:{span.trace_id:x}, span_id:{span.span_id:x}, parent_id:{span.parent_id}, flags:{span.flags:x}")
    # provide the span to the GRPC interceptor here
    active_span_source.active_span = span
    with grpc.insecure_channel(...) as channel:
        channel = intercept_channel(channel, tracer_interceptor)
Of course, you could switch the ordering of the with statements so that the span is created after the GRPC channel; that part doesn't make any difference.
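For completeness, a sketch of where the actual RPC would go; the stub and request names below are hypothetical, the point is only that the call must happen while active_span is set so the interceptor can pick it up from the ActiveSpanSource:

with tracer.start_span('span-foo') as span:
    active_span_source.active_span = span
    with grpc.insecure_channel('localhost:50051') as channel:
        channel = intercept_channel(channel, tracer_interceptor)
        stub = MyServiceStub(channel)          # hypothetical generated stub
        response = stub.MyMethod(MyRequest())  # hypothetical RPC
        # the span's trace_id is what you search for in the Jaeger UI
        print(f"trace_id: {span.trace_id:x}")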
Correct me if I'm wrong, but if you mean how to find the trace-id on the server side, you can try to access the OpenTracing span via get_active_span. The trace-id, I suppose, should be one of its tags.
I'm currently trying to create a backend server to communicate with some clients over a websocket. The clients make some requests to the backend, and the backend responds directly to the client through a consumer.
In addition, I've got an API that needs to send some requests to the client, and these have to go through the opened socket of the consumer. I'm using Django Rest Framework for the API, so I've got two apps for now: one for the consumer and one for the API. I want to know whether this is the correct approach or not.
This is the code I'm thinking about right now:
# mybackendapp/consumers.py
class MyConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        self.client_id = self.scope['url_route']['kwargs']['client_id']
        # This line I don't get it very well. It comes from:
        # [channels doc: single channels][1]
        # I don't know if I should create the Clients myself or if it's
        # created automatically
        Clients.objects.create(channel_name=self.channel_name,
                               client_id=self.client_id)
        await self.accept()

    async def disconnect(self, close_code):
        Clients.objects.filter(channel_name=self.channel_name).delete()

    async def receive(self, text_data):
        self.recv_data = json.loads(text_data)
        if self.recv_data[0] == CLIENT_REQUEST:
            self.handler = ClientRequestHandler(self.client_id,
                                                self.recv_data)
            await self.handler.run()
            self.sent_data = self.handler.response
            await self.send(self.sent_data)
        elif self.recv_data[0] == CLIENT_RESPONSE:
            self.handler = ClientResponseHandler(self.client_id,
                                                 self.recv_data)
            channel_layer = get_channel_layer()
            # Here I'm not sure, but I may have several API requests, so
            # several rows with the same client_id.
            # I welcome info on how to deal with that.
            api_channel_name = self.another_handler.ext_channel_name
            await channel_layer.send(api_channel_name, {
                "text_data": self.handler.response,
            })

    async def message_from_api(self, event):
        self.api_channel_name = event['channel_name_answer']
        # this line is for hiding the fact that I'm only manipulating data
        # to send it in a correct format to the socket
        self.another_handler = ExternalDataHandler(event['json_data'])
        query_to_client = self.another_handler.get_formatted_query()
        await self.send(query_to_client)
In receive, this consumer handles messages from the client differently depending on whether they were initiated by the client or by the REST API. You can see that with the CLIENT_REQUEST and CLIENT_RESPONSE constants.
Now from the API view:
# myapi/views.py
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer


def my_api_view(request, client_id):
    channel_layer = get_channel_layer()
    if request.method == 'POST':
        ext_request_data_json = request.data
        client_channel_name = Clients.objects.get(
            client_id=client_id).channel_name
        # I don't know what type listen_channel_name is. I assume it's a str.
        listen_channel_name = async_to_sync(channel_layer.new_channel)()
        async_to_sync(channel_layer.send)(
            client_channel_name, {
                'type': 'message.from.api',
                'json_data': ext_request_data_json,
                'channel_name_answer': listen_channel_name
            }
        )
        received_msg = async_to_sync(channel_layer.receive)(listen_channel_name)
I believe that this code should work. I want to know if it's the correct way to do it.
See djangochannelsrestframework as a possible alternative solution.
Django Channels Rest Framework provides a DRF-like interface for building channels-v3 websocket consumers.
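For reference, a rough sketch of what a djangochannelsrestframework consumer looks like, based on that project's README; the User model and serializer below are placeholders, not taken from the question:

from django.contrib.auth.models import User
from rest_framework import serializers
from djangochannelsrestframework import mixins
from djangochannelsrestframework.generics import GenericAsyncAPIConsumer


class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ['id', 'username']


# ListModelMixin exposes a DRF-style 'list' action over the websocket
class UserConsumer(mixins.ListModelMixin, GenericAsyncAPIConsumer):
    queryset = User.objects.all()
    serializer_class = UserSerializer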