I'm trying to access variables that are passed from the client (iOS; Swift) to the server over a Flask-SocketIO connection, on the connect event. Let me explain. When you handle a custom event, you have something like this on the server, where the handler receives the event data (see data in the code below):
@socketio.on('custom action', namespace='/mynamespace')
def handle_custom_action(data):
    print(data)
There are some preset events (like connect), and apparently connect does not pass any data to its handler when it's called, so the client cannot send any data with the connect event:
@socketio.on('connect', namespace='/mynamespace')
def handle_connection(data):
    print(data)  # nothing gets printed
I looked into the code a bit deeper and found this. The definition of the on function is:
def on(self, message, namespace=None):
And then within that function (I'm omitting a bit of code to get to the point):
if message == 'connect':
    ret = handler()
else:
    ret = handler(*args)
I could be wrong, but it appears that this code explicitly passes nothing to the handler on connect, and I'm not sure why. I've found some evidence that this is possible in node.js (I will update this with proper links when I find them), so I'm wondering why it isn't possible in the Flask-SocketIO library, or whether I'm just misunderstanding what I'm looking at (and if so, how to get those parameters).
Thanks!
Update:
I did find a way to access the connection parameters, but it doesn't seem like the 'right' way. I'm using the global request object and splitting the query string that comes through on the request:
data = dict(item.split("=") for item in request.event["args"][0]["QUERY_STRING"].split("&"))
OR as two lines:
data = request.event["args"][0]["QUERY_STRING"]
data = dict(item.split("=") for item in data.split("&"))
Flask-SocketIO adds an event attribute to the request: a dictionary with keys message and args, and within args is the QUERY_STRING, which I then split and turn into a dictionary. This works fine, but it doesn't really answer the original question of why connect handlers receive no data.
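For what it's worth, a slightly more robust way to parse that query string is the standard library's urllib.parse.parse_qs. This is only a sketch, assuming request.event has the same shape as above; parse_qs also handles URL-encoded values:

from urllib.parse import parse_qs

qs = request.event["args"][0]["QUERY_STRING"]
data = {k: v[0] for k, v in parse_qs(qs).items()}  # parse_qs returns lists per key

(Depending on the Flask-SocketIO version, the connect handler also runs in a normal Flask request context, so request.args may already expose the same parameters.)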
Here is an example of the iOS connection params being passed:
let connectParams = SocketIOClientOption.connectParams(["user_id" : Int(user.userId)!, "connection_id" : self.socketConnectionId])
self.socket = SocketIOClient(socketURL: URL(string: "http://www.myurl.com")!, config: [.nsp("/namespace"), .forceWebsockets(true), .forceNew(true), connectParams])
I'm a newbie in the world of Python and have been trying to solve the following for the last 3 days on my own. I've read many articles online, but none of them address my problem, or I seem to be missing something, so I decided to post my question here.
Purpose:
I'm trying to connect realtime data from my broker to a charting library via a 'socketio python' websocket, which I host locally and run on an ASGI server. The broker has provided the following two sync functions, which I've wrapped inside an async function SubAdd, but I couldn't get any output from them:
@sio.event
async def SubAdd(sid, data):  # event that handles subscribe requests from the library for realtime data
    def socket(access_token):  # this function gets the data from the broker
        data_type = "symbolData"
        symbol = ["NSE:NIFTYBANK-INDEX"]
        fs = ws.FyersSocket(access_token=access_token, run_background=False, log_path="/home/log/")
        fs.websocket_data = custom_message
        fs.subscribe(symbol=symbol, data_type=data_type)
        fs.keep_running()

    def custom_message(msg):  # function that receives the fetched data
        print(f"Custom:{msg}")

    socket(access_token)
The thing that confused me the most is line 4 of socket, i.e. fs.websocket_data = custom_message. Normally, if we write A = B, the left side of the assignment operator gets the value of the right side, i.e. A gets its value from B. But something else seems to be happening here, and I don't know what.
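For illustration: nothing unusual is happening in that assignment itself. A function is an ordinary object in Python, so assigning it to an attribute just stores a reference that the library calls later. A minimal sketch with an invented FakeSocket class, standing in for the real FyersSocket:

class FakeSocket:
    def __init__(self):
        self.websocket_data = None  # slot for a user-supplied callback

    def keep_running(self):
        # The library, not your code, decides when to call the callback.
        for msg in ("tick 1", "tick 2"):
            if self.websocket_data is not None:
                self.websocket_data(msg)

def custom_message(msg):
    print(f"Custom:{msg}")

fs = FakeSocket()
fs.websocket_data = custom_message  # no parentheses: we store the function itself
fs.keep_running()  # prints Custom:tick 1, then Custom:tick 2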
Things I've tried:
making both socket and custom_message async functions and then yielding sio.emit from inside the async custom_message
making both socket and custom_message async functions, then awaiting/yielding to append msg to another list, then using async for on that list and await sio.emit
Both of the above gave the error RuntimeWarning: coroutine 'SubAdd.<locals>.socket' was never awaited, which clearly means I'm heading in the wrong direction.
So my question is: how do I wrap these two sync functions inside an async function and get an awaitable output from custom_message? If you can directly answer the question, that's well and good, but even pointing me to a link, a resource, or a keyword to search that you think would answer my question would be greatly appreciated. Thank you.
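For reference, one common direction for this kind of problem is to run the blocking keep_running() loop in a worker thread and hand each message back to the event loop. The sketch below is an assumption, not documented FyersSocket behavior: it presumes sio is a python-socketio AsyncServer and that the broker invokes the callback on its own thread, and the event name "realtime" is invented for illustration:

import asyncio

@sio.event
async def SubAdd(sid, data):
    loop = asyncio.get_running_loop()

    def custom_message(msg):
        # Runs on the broker library's thread; schedule the emit back onto
        # the event loop instead of trying to await here.
        asyncio.run_coroutine_threadsafe(sio.emit("realtime", msg, to=sid), loop)

    def socket(access_token):
        fs = ws.FyersSocket(access_token=access_token, run_background=False, log_path="/home/log/")
        fs.websocket_data = custom_message
        fs.subscribe(symbol=["NSE:NIFTYBANK-INDEX"], data_type="symbolData")
        fs.keep_running()  # blocks this worker thread, not the event loop

    # run_in_executor moves the blocking call off the event loop and gives
    # back an awaitable; access_token is obtained as in the question.
    await loop.run_in_executor(None, socket, access_token)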
I am new to Python APIs, and I cannot get the script to return a value. Could anyone give me a direction, please? I cannot get the lambda function to work properly. I am trying to save the streamed data into variables to use in a set of operations.
from tda.auth import easy_client
from tda.client import Client
from tda.streaming import StreamClient
import asyncio
import json
import config
import pathlib
import math
import pandas as pd
client = easy_client(
    api_key=config.API_KEY,
    redirect_uri=config.REDIRECT_URI,
    token_path=config.TOKEN_PATH)

stream_client = StreamClient(client, account_id=config.ACCOUNT_ID)

async def read_stream():
    login = asyncio.create_task(stream_client.login())
    await login

    service = asyncio.create_task(stream_client.quality_of_service(StreamClient.QOSLevel.EXPRESS))
    await service

    book_snapshots = {}

    def my_nasdaq_book_handler(msg):
        book_snapshots.update(msg)

    stream_client.add_nasdaq_book_handler(my_nasdaq_book_handler)
    stream = stream_client.nasdaq_book_subs(['GOOG', 'AAPL', 'FB'])
    await stream

    while True:
        await stream_client.handle_message()
        print(book_snapshots)

asyncio.run(read_stream())
Callbacks
This (wrong) assumption
stream_client.add_nasdaq_book_handler() contains all the trade data.
shows difficulties in understanding the callback concept. Typically, a naming pattern like add ... handler indicates that this concept is in use. There is also this comment in the boilerplate code from the Streaming Client docs:
# Always add handlers before subscribing because many streams start sending
# data immediately after success, and messages with no handlers are dropped.
which consistently talks about subscribing; this word, too, is a strong indicator.
The basic principle of a callback is that instead of pulling information from a service (and being blocked until it's available), you enable the service to push that information to you when it becomes available. You typically do this by first registering one (or more) interests with the service and then waiting for things to come in.
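A tiny illustration of that register-then-push shape, with every name invented for the example:

class Service:
    def __init__(self):
        self.handlers = []

    def add_handler(self, fn):
        self.handlers.append(fn)  # step 1: register an interest

    def deliver(self, msg):
        for fn in self.handlers:  # step 2: the service pushes to you
            fn(msg)

service = Service()
service.add_handler(lambda msg: print("got:", msg))
service.deliver("hello")  # prints got: hello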
In the section Handling Messages they give an example of a handler function (to be provided by you) as follows:
def sample_handler(msg):
    print(json.dumps(msg, indent=4))
which takes a message argument that is dumped to the console in JSON format. The lambda in your example does exactly the same.
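That is, the following two registrations, using the add_nasdaq_book_handler call from the question, are equivalent:

stream_client.add_nasdaq_book_handler(sample_handler)
stream_client.add_nasdaq_book_handler(lambda msg: print(json.dumps(msg, indent=4)))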
Lambdas
it's not possible to return a value from a lambda function because it is anonymous
This is not correct. If lambda functions couldn't return values, they wouldn't play such an important role. See 4.7.6. Lambda Expressions in the Python 3 docs.
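A lambda's body is a single expression, and the value of that expression is what the lambda returns:

add = lambda a, b: a + b
print(add(2, 3))  # prints 5 - the lambda returned a value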
The problem in your case is that neither function does anything you want; both just print to the console. Now you need to get into these functions and tell them what to do.
Control
Actually, your program runs within this loop
while True:
    await stream_client.handle_message()
Each stream_client.handle_message() call ultimately causes a call to the function you registered by calling stream_client.add_nasdaq_book_handler. So that's the point: your script defines what to do with messages when they arrive, before it starts waiting.
For example, your function could just collect the arriving messages:
book_snapshots = []

def my_nasdaq_book_handler(msg):
    book_snapshots.append(msg)
A global object book_snapshots is used in the implementation. You may expand/change this function at will (of course, translating the information into JSON format will help you access it in a structured way). This line will register your function:
stream_client.add_nasdaq_book_handler(my_nasdaq_book_handler)
I have a Python Firebase SDK on the server, which writes to Firebase real-time DB.
I have a Javascript Firebase client on the browser, which registers itself as a listener for "child_added" events.
Authentication is handled by the Python server.
With Firebase rules allowing reads, the client listener gets data on the first event (all data at that FB location), but only a key with empty data on subsequent child_added events.
Here's the listener registration:
firebaseRef.on(
    "child_added",
    function(snapshot, prevChildKey)
    {
        console.log("FIREBASE REF: ", firebaseRef);
        console.log("FIREBASE KEY: ", snapshot.key);
        console.log("FIREBASE VALUE: ", snapshot.val());
    }
);
"REF" is always good.
"KEY" is always good.
But "VALUE" is empty after the first full retrieval of that db location.
I tried instantiating the Firebase reference anew each time inside the listener function. Same result.
I tried a "value" event instead of "child_added". No improvement.
The data on the Firebase side looks perfect in the FB console.
Here's how the data is being written by the Python admin to firebase:
def push_value(rootAddr, childAddr, data):
    try:
        ref = db.reference(rootAddr)
        posts_ref = ref.child(childAddr)
        new_post_ref = posts_ref.push()
        new_post_ref.set(data)
    except Exception:
        raise
And as I said, this works perfectly to put the data at the correct place in FB.
Why are the event objects empty on subsequent events, after the first full download of the database?
I found the answer. Like most things, it turned out to be simple, but took a couple of days to find. Maybe this will save someone else.
On the docs page:
http://firebase.google.com/docs/database/admin/save-data#section-push
"In JavaScript and Python, the pattern of calling push() and then
immediately calling set() is so common that the Firebase SDK lets you
combine them by passing the data to be set directly to push() as
follows..."
I suggest the wording should emphasize that you must do it that way.
The earlier Python example on the same page doesn't work:
new_post_ref = posts_ref.push()
new_post_ref.set({
    'author': 'gracehop',
    'title': 'Announcing COBOL, a New Programming Language'
})
A separate, empty push() followed by set(data), as in this example, won't work in Python and JavaScript: in those SDKs the push() implicitly also does a set(), so the empty push() triggers the event listeners with empty data, and the subsequent set(data) doesn't trigger an event carrying the data either.
In other words, the code in the question:
new_post_ref = posts_ref.push()
new_post_ref.set(data)
must be:
new_post_ref = posts_ref.push(data)
with set() not explicitly called.
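Applied to the push_value helper from the question, the fixed version would look something like this sketch:

def push_value(rootAddr, childAddr, data):
    ref = db.reference(rootAddr)
    posts_ref = ref.child(childAddr)
    # push(data) creates the child and sets its value in one step,
    # so listeners fire exactly once, with the data attached.
    new_post_ref = posts_ref.push(data)
    return new_post_ref.key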
Since this push() code happens only when new objects are written to FB, the initial download to the client wasn't affected.
Though the documentation may be trying to convey the evolution of the design, it fails to point out that only the last Python and JavaScript example given will work, and the others shouldn't be used.
I'm new to both Flask and Python. I've got an application I'm working on to hold weather data, and I allow both GET and POST calls into my Flask application. Unfortunately, the automated calls to my API do not always come back with the proper results. I'm currently storing my data in a global variable; when a POST is made, the new data is appended to my existing data. Sometimes, though, a GET does not receive the most up-to-date version of my global data variable. I believe the change is not being propagated to the global variable before the GET is called, because if I run the GET again later, the proper result comes back.
weatherData = []  # filled with data read from a CSV on initialization

class FullHistory(Resource):
    def get(self):
        ret = []
        for row in weatherData:
            val = row['DATE']
            ret.append({"DATE": str(val)})
        return ret

    def post(self):
        global weatherData
        newWeatherData = weatherData  # note: this aliases the global list rather than copying it
        args = parser.parse_args()
        newVal = int(args['DATE'])
        newWeatherData.append({'DATE': int(args['DATE']),
                               'TMAX': float(args['TMAX']),
                               'TMIN': float(args['TMIN'])})
        weatherData = newWeatherData
        # time.sleep(5)
        return {"DATE": str(newVal)}, 201

class SelectHistory(Resource):
    def get(self, date_id):
        val = int(date_id)
        bVal = False
        # time.sleep(5)
        global weatherData
        for row in weatherData:
            if row['DATE'] == val:
                wd = row
                bVal = True
                break
        if bVal:
            return {"DATE": str(wd['DATE']),
                    "TMAX": float(wd['TMAX']),
                    "TMIN": float(wd['TMIN'])}
        else:
            return "HTTP Error code 404", 404

    def delete(self, date_id):
        val = int(date_id)
        wdIter = None
        for row in weatherData:
            if row['DATE'] == val:
                wdIter = row
                break
        if wdIter is not None:
            weatherData.remove(wdIter)
            return {"DATE": str(val)}, 204
        else:
            return "HTTP Error code 404", 404
Is there any way I can ensure that my global variable is up to date, or make my API wait to respond until I'm sure the update has been applied? This was supposed to be a simple application; I would really rather not have to learn how to use threads in Python just yet. I've made sure that my GET request does not start until after the POST has returned a response. I know one workaround is to use sleep to delay my responses, but I would rather understand why the update isn't visible immediately in the first place.
I believe your problem is the application context. As stated here:
The application context is created and destroyed as necessary. It
never moves between threads and it will not be shared between
requests. As such it is the perfect place to store database connection
information and other things. The internal stack object is called
flask._app_ctx_stack. Extensions are free to store additional
information on the topmost level, assuming they pick a sufficiently
unique name and should put their information there, instead of on the
flask.g object which is reserved for user code.
Though it says you can store data at the "topmost level," that's not reliable, and if you scale your project out to multiple worker processes with uWSGI, for instance, you'll need persistence to share data between threads and processes regardless. You should be using a database, Redis, or at the very least updating your .csv file each time you mutate your data.
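As an illustration of that last suggestion, here is a minimal sketch using redis-py, assuming a Redis instance on localhost and keeping the weatherData name from the question:

import json
import redis

r = redis.Redis()  # assumes Redis running on localhost:6379

def load_weather_data():
    raw = r.get("weatherData")
    return json.loads(raw) if raw else []

def save_weather_data(rows):
    # Every worker process reads and writes the same shared copy.
    r.set("weatherData", json.dumps(rows))

Each request handler would then call load_weather_data() at the start and save_weather_data() after mutating, instead of touching a module-level global.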
I have a Client that currently does the following:
connects
collects some data locally
sends that data to a server
repeats
if disconnected, reconnects and continues the above (not shown)
Like this:
def do_send(self):
    def get_data():
        # do something
        return data

    def send_data(data):
        self.sendMessage(data)

    return deferToThread(get_data).addCallback(send_data)

def connectionMade(self):
    WebSocketClientProtocol.connectionMade(self)
    self.sender = task.LoopingCall(self.do_send)
    self.sender.start(60)
However, when disconnected, I would like the data collection to continue, probably queuing and writing to file at a certain limit. I have reviewed the DeferredQueue object which seems like what I need, but I can't seem to crack it.
In pseudo-code, it would go something like this:
queue = DeferredQueue()

# in a separate class from the client protocol
def start_data_collection(self):
    self.collecter = task.LoopingCall(self.get_data)
    self.collecter.start(60)

def get_data(self):
    # do something
    queue.put(data)
Then have the client protocol check the queue, which is where I get lost. Is DeferredQueue what I need, or is there a better way?
A list would work just as well. You'll presumably get lost in the same place - how do you have the client protocol check the list?
Either way, here's one answer:
queued = []

...

connecting = endpoint.connect(factory)

def connected(protocol):
    if queued:
        sending = protocol.sendMessage(queued.pop(0))
        sending.addCallback(sendNextMessage, protocol)
        sending.addErrback(reconnect)

connecting.addCallback(connected)
The idea here is that at some point an event happens: your connection is established. This example represents that event as the connecting Deferred. When the event happens, connected is called. This example pops the first item from the queue (a list) and sends it. It waits for the send to be acknowledged and then sends the next message. It also implies some logic about handling errors by reconnecting.
Your code could look different. You could use the Protocol.connectionMade callback to represent the connection event instead. The core idea is the same - define callbacks to handle certain events when they happen. Whether you use an endpoint's connect Deferred or a protocol's connectionMade doesn't really matter.
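If you do want the DeferredQueue flavor, here is a minimal sketch of the consumer side, under the assumption that sendMessage and the queue from the pseudo-code above are in scope; queue.get() returns a Deferred that fires as soon as an item is available:

from twisted.internet.defer import DeferredQueue

queue = DeferredQueue()

def consume(protocol):
    d = queue.get()  # Deferred that fires once something has been put()

    def send(data):
        protocol.sendMessage(data)
        consume(protocol)  # loop: wait for the next queued item

    d.addCallback(send)

# e.g. call consume(self) from connectionMade, while the collector keeps
# calling queue.put(data) whether or not a connection currently exists.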