I'm a newbie in the world of Python and have been trying to solve the following on my own for the last 3 days. I've read many articles online, but none of them address my problem, or I seem to be missing something, so I decided to post my question here.
Purpose:
I'm trying to connect realtime data from my broker to a charting library via a 'socketio python' websocket, which I host locally and which runs on an ASGI server. The broker has provided the following two sync functions, which I've wrapped inside an async function SubAdd but couldn't get an output from:
@sio.event
async def SubAdd(sid, data):  # event that handles subscribe requests from library for realtime data
    def socket(access_token):  # This function gets the data from the broker
        data_type = "symbolData"
        symbol = ["NSE:NIFTYBANK-INDEX"]
        fs = ws.FyersSocket(access_token=access_token, run_background=False, log_path="/home/log/")
        fs.websocket_data = custom_message
        fs.subscribe(symbol=symbol, data_type=data_type)
        fs.keep_running()

    def custom_message(msg):  # Function that receives the fetched data
        print(f"Custom:{msg}")

    socket(access_token)
The thing that confused me the most is line 4 of socket, i.e. fs.websocket_data = custom_message. Normally, if we write A = B, the left side of the assignment operator gets assigned the value of the right side, i.e. A gets its value from B. But something else seems to be happening here, and I don't know what.
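In fact, it is a plain assignment too: the right-hand side is the function object itself (no parentheses, so it isn't called), and the library stores it so it can call it later for each incoming message. A minimal sketch of the same pattern (the Emitter class below is hypothetical, not part of the fyers library):
class Emitter:
    def __init__(self):
        self.websocket_data = None  # slot for a user-supplied callback

    def feed(self, msg):
        if self.websocket_data is not None:
            self.websocket_data(msg)  # the stored function is called here

def custom_message(msg):
    print(f"Custom:{msg}")

e = Emitter()
e.websocket_data = custom_message  # stores the function object; nothing runs yet
e.feed("hello")                    # prints "Custom:hello"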
Things I've tried:
making both socket and custom_message async functions and then yielding sio.emit from inside the async custom_message
making both socket and custom_message async functions, then awaiting/yielding to append msg to another list, then using async for on that list and awaiting sio.emit
Both of the above gave the error RuntimeWarning: coroutine 'SubAdd.<locals>.socket' was never awaited, which clearly means I'm heading in the wrong direction.
So my question is: how do I wrap these two sync functions inside an async function and get an awaitable output from custom_message? If you could directly answer the question, that's well and good, but even a link to a resource, or a keyword to search that you think would answer my question, is greatly appreciated. Thank you.
I appreciate that the question I am about to ask is rather broad but, as a newcomer to Python, I am struggling to find the [best] way of doing something which would be trivial in, say, Node.js, and pretty trivial in other environments such as C#.
Let's say that there is a warehouse full of stuff. And let's say that there is a websocket interface onto that warehouse with two characteristics: on client connection it pumps out a full list of the warehouse's current inventory, and it then follows that up with further streaming updates when the inventory changes.
The web is full of examples of how, in Python, you connect to the warehouse and respond to changes in its state. But...
What if I want to connect to two warehouses and do something based on the combined information retrieved separately from each one? And what if I want to do things based on factors such as time, rather than solely being driven by inventory changes and incoming websocket messages?
In all the examples I've seen - and it's beginning to feel like hundreds - there is, somewhere, in some form, a run() or a run_forever() or a run_until_complete() etc. In other words, the I/O may be asynchronous, but there is always a massive blocking operation in the code, and always two fundamental assumptions which don't fit my case: that there will only be one websocket connection, and that all processing will be driven by events sent out by the [single] websocket server.
It's very unclear to me whether the answer to my question is some sort of use of multiple event loops, or of multiple threads, or something else.
To date, experimenting with Python has felt rather like being on the penthouse floor, admiring the quirky but undeniably elegant decor. But then you get in the elevator, press the button marked "parallelism" or "concurrency", and the elevator goes into freefall, eventually depositing you in a basement filled with some pretty ugly and steaming pipes.
... Returning from flowery metaphors back to the technical, the key thing I'm struggling with is the Python equivalent of, say, Node.js code which could be as trivially simple as the following example [left inelegant for simplicity]:
var aggregateState = { ... some sort of representation of combined state ... };
var socket1 = new WebSocket("wss://warehouse1");
socket1.on("message", OnUpdateFromWarehouse);
var socket2 = new WebSocket("wss://warehouse2");
socket2.on("message", OnUpdateFromWarehouse);
function OnUpdateFromWarehouse(message)
{
    ... Take the information and use it to update aggregate state from both warehouses ...
}
Answering my own question, in the hope that it may help other Python newcomers... asyncio seems to be the way to go (though there are gotchas such as the alarming ease with which you can deadlock the event loop).
Assuming the use of an asyncio-friendly websocket module such as websockets, what seems to work is a framework along the following lines - shorn, for simplicity, of logic such as reconnects. (The premise remains a warehouse which sends an initial list of its full inventory, and then sends updates to that initial state.)
class Warehouse:
    def __init__(self, warehouse_url):
        self.warehouse_url = warehouse_url
        self.inventory = {}  # Some description of the warehouse's inventory

    async def destroy(self):
        if self.websocket.open:
            await self.websocket.close()  # Terminates any recv() in wait_for_incoming()
        await self.incoming_message_task  # keep asyncio happy by awaiting the "background" task

    async def start(self):
        try:
            # Connect to the warehouse
            self.websocket = await connect(self.warehouse_url)
            # Get its initial message which describes its full state
            initial_inventory = await self.websocket.recv()
            # Store the initial inventory
            self.process_initial_inventory(initial_inventory)
            # Set up a "background" task for further streaming reads of the web socket
            self.incoming_message_task = asyncio.create_task(self.wait_for_incoming())
            # Done
            return True
        except Exception:
            # Connection failed (or some unexpected error)
            return False

    async def wait_for_incoming(self):
        while self.websocket.open:
            try:
                update_message = await self.websocket.recv()
                asyncio.create_task(self.process_update_message(update_message))
            except Exception:
                # Presumably, socket closure
                pass

    def process_initial_inventory(self, initial_inventory_message):
        ... Process initial_inventory_message into self.inventory ...

    async def process_update_message(self, update_message):
        ... Merge update_message into self.inventory ...
        ... And fire some sort of event so that the object's ...
        ... creator can detect the change. There seems to be no ...
        ... consensus about what is a pythonic way of implementing events, ...
        ... so I'll declare that - potentially trivial - element as out-of-scope ...
After completing the initial connection logic, one key thing is setting up a "background" task which repeatedly reads further update messages coming in over the websocket. The code above doesn't include any firing of events, but there are all sorts of ways in which process_update_message() can do this (many of them trivially simple), allowing the object's creator to deal with notifications whenever and however it sees fit. The streaming messages will continue to be received, and any events will continue to be fired, for as long as the object's creator continues to play nicely with asyncio and participate in co-operative multitasking.
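One trivially simple option (my illustration, not something the answer above commits to) is a plain list of callbacks that the creator can append to:
class Warehouse:
    def __init__(self, warehouse_url):
        self.warehouse_url = warehouse_url
        self.inventory = {}
        self.on_change = []  # callables the object's creator registers

    async def process_update_message(self, update_message):
        # ... merge update_message into self.inventory, then:
        for callback in self.on_change:
            callback(self)  # notify whoever registered interest
The creator then simply registers interest with something like warehouse.on_change.append(lambda w: print(w.inventory)).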
With that in place, a connection can be established along the following lines:
async def main():
    warehouse1 = Warehouse("wss://warehouse1")
    if await warehouse1.start():
        ... Connection succeeded. Update messages will now be processed
        in the "background" provided that other users of the event loop
        yield in some way ...
    else:
        ... Connection failed ...

asyncio.run(main())
Multiple warehouses can be started in several ways, including doing a create_task(warehouse.start()) on each one and then doing a gather on the tasks to ensure/check that they're all okay, as sketched below.
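A minimal sketch of that approach (reusing the Warehouse class from above):
async def main():
    warehouses = [Warehouse("wss://warehouse1"), Warehouse("wss://warehouse2")]
    start_tasks = [asyncio.create_task(w.start()) for w in warehouses]
    results = await asyncio.gather(*start_tasks)  # one True/False per warehouse
    if not all(results):
        ...  # at least one connection failed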
When it's time to quit, it's necessary to call destroy() on each warehouse to keep asyncio happy, stop it complaining about orphaned tasks, and allow everything to shut down nicely.
But there's one common element which this doesn't cover. Extending the original premise above, let's say that the warehouse also accepts requests from our websocket client, such as "ship X to Y". The success/failure responses to these requests will come in alongside the general update messages; it generally won't be possible to guarantee that the first recv() after the send() of a request will be the response to that request. This complicates process_update_message().
The best answer I've found may or may not be considered "pythonic" because it uses a Future in a way which is strongly analogous to a TaskCompletionSource in .NET.
Let's invent a couple of implementation details; any real-world scenario is likely to look something like this:
We can supply a request_id when submitting an instruction to the warehouse
The success/failure response from the warehouse repeats the request_id back to us (which also distinguishes command-response messages from inventory-update messages)
The first step is to have a dictionary which maps the ID of pending, in-progress requests to Future objects:
def __init__(self, warehouse_url):
    ...
    self.pending_requests = {}
The definition of a coroutine which sends a request then looks something like this:
async def send_request(self, some_request_definition):
    # Allocate a unique ID for the request
    request_id = <some unique request id>
    # Create a Future for the pending request
    request_future = asyncio.Future()
    # Store the map of the ID -> Future in the dictionary of pending requests
    self.pending_requests[request_id] = request_future
    # Build a request message to send to the server, somehow including the request_id
    request_msg = <some request definition, including the request_id>
    # Send the message
    await self.websocket.send(request_msg)
    # Wait for the future to complete - we're now asynchronously awaiting
    # activity in a separate function
    await asyncio.wait_for(request_future, timeout=None)
    # Return the result of the Future as the return value of send_request()
    return request_future.result()
A caller can create a request and wait for its asynchronous response using something like the following:
some_result = await warehouse.send_request(<some request def>)
The key to making this all work is then to modify and extend process_update_message() to do the following:
Distinguish between request responses versus inventory updates
For the former, extract the request ID (which our invented scenario says gets repeated back to us)
Look up the pending Future for the request
Do a set_result() on it (whose value can be anything depending on what the server's response says). This releases send_request() and causes the await from it to be resolved.
For example:
async def process_update_message(self, update_message):
    if <some test that update_message is a request response>:
        request_id = <extract the request ID repeated back in update_message>
        # Get the Future for this request ID
        request_future = self.pending_requests[request_id]
        # Create some sort of return value for send_request() based on the response
        return_value = <some result of the request>
        # Complete the Future, causing send_request() to return
        request_future.set_result(return_value)
    else:
        ... handle inventory updates as before ...
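One small refinement worth considering (my addition, not part of the original answer): fetch the Future with pop() so completed requests don't accumulate in pending_requests:
request_future = self.pending_requests.pop(request_id)  # look up and remove in one step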
I've not used sockets with asyncio, but you're likely just looking for asyncio's open_connection:
async def socket_activity(host, port, callback):
    # note: open_connection() takes a host and port rather than a URL
    reader, _ = await asyncio.open_connection(host, port)
    while True:
        message = await reader.read(4096)  # read a chunk; read() with no size would wait for EOF
        if not message:  # empty bytes on EOF
            break  # connection was closed
        await callback(message)
Then add these to the event loop
tasks = []  # keeping a reference prevents these from being garbage collected
for host in ["warehouse1", "warehouse2"]:
    tasks.append(asyncio.create_task(
        socket_activity(host, 443, callback)  # port chosen for illustration
    ))
# return tasks  # or work with them
If you want to wait in a coroutine until N operations are complete, you can use asyncio.gather().
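For instance (using the tasks list from above):
await asyncio.gather(*tasks)  # resumes only once every socket_activity() task has finished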
Alternatively, you may find Tornado does everything you want and more (I based my Answer off this one)
Tornado websocket client: how to async on_message? (coroutine was never awaited)
Is there a function that runs before the bot gets closed?
Ex.
@bot.event
async def on_close(ctx):
    export_files()
I'm making a bot that reads all the new messages and adds them to the author's list of words, and when the command .get_word_count gets called, all of the words that the author has sent are shown:
im: 98
under: 1
the: 1
water: 1
please: 1
help: 1
me: 1
test: 124136624745687697698608
The reason I'm storing the data is that it's more efficient to store it and start a new read rather than going through all of the channels and getting the word counts.
The on_disconnect event. However, do note that it might trigger even when no connection was established yet, if establishing one fails.
However, I haven't really understood what your use case is and what you need on_disconnect for. There might be a better way.
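For completeness, a minimal sketch (reusing the export_files() from the question; on_disconnect takes no arguments):
@bot.event
async def on_disconnect():
    # careful: this can also fire when an attempt to connect fails, as noted above
    export_files()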
The best way I have found so far is to subclass commands.Bot and override the close method:
class MyBot(commands.Bot):
    async def close(self):
        # your cleanup here; you can also use attributes (even custom ones) via self.attr
        export_files()
        await super().close()  # let the normal shutdown still run

bot = MyBot(command_prefix="!")
This is well suited if you need a coroutine.
I am new to Python APIs and cannot get the script to return a value. Could anyone give me a direction, please? I cannot get the lambda function to work properly. I am trying to save the streamed data into variables to use with a set of operations.
from tda.auth import easy_client
from tda.client import Client
from tda.streaming import StreamClient

import asyncio
import json
import config
import pathlib
import math
import pandas as pd

client = easy_client(
    api_key=config.API_KEY,
    redirect_uri=config.REDIRECT_URI,
    token_path=config.TOKEN_PATH)

stream_client = StreamClient(client, account_id=config.ACCOUNT_ID)

async def read_stream():
    login = asyncio.create_task(stream_client.login())
    await login

    service = asyncio.create_task(stream_client.quality_of_service(StreamClient.QOSLevel.EXPRESS))
    await service

    book_snapshots = {}

    def my_nasdaq_book_handler(msg):
        book_snapshots.update(msg)

    stream_client.add_nasdaq_book_handler(my_nasdaq_book_handler)
    stream = stream_client.nasdaq_book_subs(['GOOG', 'AAPL', 'FB'])
    await stream

    while True:
        await stream_client.handle_message()
        print(book_snapshots)

asyncio.run(read_stream())
Callbacks
This (wrong) assumption
stream_client.add_nasdaq_book_handler() contains all the trade data.
shows difficulties in understanding the callback concept. Typically the naming pattern add ... handler indicates that this concept is being used. There is also the comment in the boilerplate code from the Streaming Client docs
# Always add handlers before subscribing because many streams start sending
# data immediately after success, and messages with no handlers are dropped.
that consistently talks about subscribing - this word, too, is a strong indicator.
The basic principle of a callback is that instead of pulling information from a service (and being blocked until it's available), you enable the service to push that information to you when it becomes available. You typically do this by first registering one (or more) interests with the service and then waiting for things to come in.
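A minimal, self-contained sketch of the principle (the Service class is invented purely for illustration):
class Service:
    def __init__(self):
        self.handlers = []

    def add_handler(self, handler):
        self.handlers.append(handler)  # 1. caller registers interest

    def run(self):
        for msg in ("quote 1", "quote 2"):  # stand-in for messages arriving
            for handler in self.handlers:
                handler(msg)                # 2. service pushes each message to the callbacks

received = []
service = Service()
service.add_handler(received.append)  # our callback; the service calls it for us
service.run()
print(received)  # ['quote 1', 'quote 2']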
In the section Handling Messages they give an example of a function (to be provided by you) as follows:
def sample_handler(msg):
    print(json.dumps(msg, indent=4))
which takes a message argument that is dumped in JSON format to the console. The lambda in your example does exactly the same.
Lambdas
it's not possible to return a value from a lambda function because it is anonymous
This is not correct. If lambda functions weren't able to return values, they wouldn't play such an important role. See 4.7.6. Lambda Expressions in the Python 3 docs.
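A one-liner demonstrates it:
square = lambda x: x * x  # the body is a single expression; its value is the return value
assert square(4) == 16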
The problem in your case is that neither function does anything you want; both just print to the console. Now you need to put into these functions what you actually want to happen.
Control
Actually, your program runs within this loop
while True:
    await stream_client.handle_message()
Each stream_client.handle_message() call eventually causes a call to the function you registered by calling stream_client.add_nasdaq_book_handler. So that's the point: your script defines what to do when messages arrive before it starts waiting.
For example, your function could just collect the arriving messages:
book_snapshots = []

def my_nasdaq_book_handler(msg):
    book_snapshots.append(msg)
A global object book_snapshots is used in the implementation. You may expand/change this function at will (of course, translating the information into JSON format will help you access it in a structured way). This line will register your function:
stream_client.add_nasdaq_book_handler(my_nasdaq_book_handler)
I'm trying to access variables that are being passed from the client (iOS; Swift) to the server on a Flask-SocketIO connection during the connect action. Let me explain. When you handle an arbitrary custom action, you have something like this on the server, which includes a data argument (see data in the code below):
@socketio.on('custom action', namespace='/mynamespace')
def handle_custom_action(data):
    print(data)
There are some preset actions (like connect), and apparently the connect handler does not receive any data when it's called, so the client cannot send any data on the connect action:
@socketio.on('connect', namespace='/mynamespace')
def handle_connection(data):
    print(data)  # nothing gets printed
I looked into the code a bit deeper and found this. The definition of the on function is:
def on(self, message, namespace=None):
And then within that function (I'm omitting a bit of code to get to the point):
if message == 'connect':
    ret = handler()
else:
    ret = handler(*args)
I could be wrong, but it appears that this code explicitly avoids passing anything to the handler on connect, and I'm not sure why. I've found some evidence that this is possible in node.js (I will update this with proper links when I find them), so I'm wondering why this isn't possible in the Flask-SocketIO library, or whether I'm just misunderstanding what I'm looking at (and if so, how to get those parameters).
Thanks!
Update:
I did find a way to access the connection parameters, but it doesn't seem like the 'right' way. I'm using the global request and splitting up the GET parameters / query string that come through on the request:
data = dict(item.split("=") for item in request.event["args"][0]["QUERY_STRING"].split("&"))
OR as two lines:
query_string = request.event["args"][0]["QUERY_STRING"]
data = dict(item.split("=") for item in query_string.split("&"))
Flask-SocketIO adds event, a dictionary with the keys message and args; within args is the QUERY_STRING, which I then split and turn into a dictionary. This works fine, but it doesn't necessarily answer the original question as to why there is no callback.
Here is an example of the iOS connection params being passed:
let connectParams = SocketIOClientOption.connectParams(["user_id" : Int(user.userId)!, "connection_id" : self.socketConnectionId])
self.socket = SocketIOClient(socketURL: URL(string: "http://www.myurl.com")!, config: [.nsp("/namespace"), .forceWebsockets(true), .forceNew(true), connectParams])
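On the server side, a possibly cleaner variant (hedged: this relies on Flask-SocketIO running the connect handler inside a Flask request context for the handshake request, so request.args should hold the parsed query string):
from flask import request

@socketio.on('connect', namespace='/mynamespace')
def handle_connection():
    user_id = request.args.get('user_id')
    connection_id = request.args.get('connection_id')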
I'm trying to connect to a TeamSpeak server using the QueryServer to make a bot. I've taken advice from this thread, however I still need help.
This is The TeamSpeak API that I'm using.
Before the edits, this was the summary of what actually happened in my script (1 connection):
It connects.
It checks for channel ID (and it's own client ID)
It joins the channel and starts reading everything
If someone says a specific command, it executes the command and then it disconnects.
How can I make it so it doesn't disconnect? How can I make the script stay in a "waiting" state so it can keep reading after the command is executed?
I am using Python 3.4.1.
I tried learning threading, but either I'm dumb or it doesn't work the way I thought it would. There's another "bug": once it's waiting for events, if I don't trigger anything with a command, it disconnects after 60 seconds.
# Libraries
import ts3
import threading
import datetime
from random import choice, sample

# Data needed #
USER = "thisisafakename"
PASS = "something"
HOST = "111.111.111.111"
PORT = 10011
SID = 1

class BotPrincipal:
    def __init__(self, manejador=False):
        self.ts3conn = ts3.query.TS3Connection(HOST, PORT)
        self.ts3conn.login(client_login_name=USER, client_login_password=PASS)
        self.ts3conn.use(sid=SID)
        ChannelToJoin = Bot.GettingChannelID("TestingBot")
        try:  # Login with a client that is ok
            self.ts3conn.clientupdate(client_nickname="The Reader Bot")
            self.MyData = self.GettingMyData()
            self.MoveUserToChannel(ChannelToJoin, Bot.MyData["client_id"])
            self.suscribirEvento("textchannel", ChannelToJoin)
            self.ts3conn.on_event = self.manejadorDeEventos
            self.ts3conn.recv_in_thread()
        except ts3.query.TS3QueryError:  # Name already exists, 2nd client connects with this info
            self.ts3conn.clientupdate(client_nickname="The Writer Bot")
            self.MyData = self.GettingMyData()
            self.MoveUserToChannel(ChannelToJoin, Bot.MyData["client_id"])

    def __del__(self):
        self.ts3conn.close()

    def GettingMyData(self):
        respuesta = self.ts3conn.whoami()
        return respuesta.parsed[0]

    def GettingChannelID(self, nombre):
        respuesta = self.ts3conn.channelfind(pattern=ts3.escape.TS3Escape.unescape(nombre))
        return respuesta.parsed[0]["cid"]

    def MoveUserToChannel(self, idCanal, idUsuario, passCanal=None):
        self.ts3conn.clientmove(cid=idCanal, clid=idUsuario, cpw=passCanal)

    def suscribirEvento(self, tipoEvento, idCanal):
        self.ts3conn.servernotifyregister(event=tipoEvento, id_=idCanal)

    def SendTextToChannel(self, idCanal, mensajito="Error"):
        self.ts3conn.sendtextmessage(targetmode=2, target=idCanal, msg=mensajito)  # This works
        print("test")  # PROBLEM HERE: this doesn't work. Why? The line above did work

    def manejadorDeEventos(sender, event):
        message = event.parsed[0]['msg']
        if "test" in message:  # This works
            Bot.SendTextToChannel(ChannelToJoin, "This is a test")  # This works

if __name__ == "__main__":
    Bot = BotPrincipal()
    threadprincipal = threading.Thread(target=Bot.__init__)
    threadprincipal.start()
Prior to using 2 bots, I tested launching SendTextToChannel when it connects, and it works perfectly, allowing me to do anything that I want after it sends the text to the channel. The bug that makes the entire Python code stop only happens if it's triggered by manejadorDeEventos.
Edit 1 - Experimenting with threading.
I messed it up big time with threading, ending up with 2 clients connecting at the same time. Somehow I think one of them is reading the events and the other one is answering. The script doesn't close itself anymore, and that's a win, but having a clone connection doesn't look good.
Edit 2 - Updated code and actual state of the problem.
I managed to make the double connection work more or less "fine", but it disconnects if nothing happens in the room for 60 seconds. I tried using threading.Timer but I'm unable to make it work. The question's code has been updated accordingly.
I would like an answer that helps me do both reading from the channel and answering to it without needing to connect a second bot (like it's actually doing...), and I would give extra points if the answer also helps me understand an easy way to send a query to the server every 50 seconds so it doesn't disconnect.
From looking at the source, recv_in_thread doesn't create a thread that loops around receiving messages until quit time, it creates a thread that receives a single message and then exits:
def recv_in_thread(self):
    """
    Calls :meth:`recv` in a thread. This is useful,
    if you used ``servernotifyregister`` and you expect to receive events.
    """
    thread = threading.Thread(target=self.recv, args=(True,))
    thread.start()
    return None
That implies that you have to repeatedly call recv_in_thread, not just call it once.
I'm not sure exactly where to do so from reading the docs, but presumably it's at the end of whatever callback gets triggered by a received event; I think that's your manejadorDeEventos method? (Or maybe it's something related to the servernotifyregister method? I'm not sure what servernotifyregister is for and what on_event is for…)
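If that guess is right, the re-arm could look something like this (a sketch, keeping the question's names and using the corrected method signature discussed in the side points below):
def manejadorDeEventos(self, event):
    message = event.parsed[0]['msg']
    if "test" in message:
        self.SendTextToChannel(ChannelToJoin, "This is a test")
    self.ts3conn.recv_in_thread()  # re-arm: each call receives only a single event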
That manejadorDeEventos brings up two side points:
You've declared manejadorDeEventos wrong. Every method has to take self as its first parameter. When you pass a bound method, like self.manejadorDeEventos, that bound self object is going to be passed as the first argument, before any arguments that the caller passes. (There are exceptions to this for classmethods and staticmethods, but those don't apply here.) Also, within that method, you should almost certainly be accessing self, not a global variable Bot that happens to be the same object as self.
If manejadorDeEventos is actually the callback for recv_in_thread, you've got a race condition here: if the first message comes in before your main thread finishes the on_event assignment, recv_in_thread won't be able to call your event handler. (This is exactly the kind of bug that often shows up one time in a million, making it a huge pain to debug when you discover it months after deploying or publishing your code.) So, reverse those two lines.
One last thing: a brief glimpse at this library's code is a bit worrisome. It doesn't look like it's written by someone who really knows what they're doing. The method I copied above only has 3 lines of code, but it includes a useless return None and a leaked Thread that can never be joined, not to mention that the whole design of making you call this method (and spawn a new thread) after each event received is weird, and even more so given that it's not really explained. If this is the standard client library for a service you have to use, then you really don't have much choice in the matter, but if it's not, I'd consider looking for a different library.