Get live price in milliseconds (Binance WebSocket) - Python

How can I change my code so that I get the information every 100 milliseconds?
import asyncio
from binance import AsyncClient, BinanceSocketManager

async def main():
    client = await AsyncClient.create()
    bm = BinanceSocketManager(client)
    # start any sockets here, i.e. a trade socket
    ts = bm.trade_socket('BTCBUSD')
    # then start receiving messages
    async with ts as tscm:
        while True:
            res = await tscm.recv()
            print(res)
    await client.close_connection()

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
I appreciate every answer I can get, thanks a lot!

You asked about "how to get live price in milliseconds (Binance Websocket)"
Here is a list of the available streams on Binance with the available options for "update speed": https://github.com/binance/binance-spot-api-docs/blob/master/web-socket-streams.md#detailed-stream-information
To my understanding, most stream types update once per second (update speed: 1000ms).
Only depth streams can update 10 times per second (they offer 1000ms and 100ms intervals).
Trade streams (agg and normal) and bookTicker (individual and all) are real time. That means, as soon as this info is available, you will receive it.
If there is no trade, you will not receive anything, because there is no change in price... but as soon as a trade happens, it will get reported to you.
If you want to know the current best buy and sell price for an asset, you can use the bookTicker, which is much less data compared to the depth and diff. depth streams. If you need more than the first positions of the current order book, I recommend using a local depth cache: https://www.lucit.tech/unicorn-binance-local-depth-cache.html
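If the 100ms updates are specifically what you're after, the depth streams are the only ones that offer them; per the docs linked above, you select the faster interval by appending @100ms to the stream name. A minimal sketch with the websockets library (error handling omitted):

import asyncio
import json
import websockets

async def depth_100ms():
    # diff. depth stream at the 100ms update speed
    url = "wss://stream.binance.com:9443/ws/btcusdt@depth@100ms"
    async with websockets.connect(url) as ws:
        while True:
            update = json.loads(await ws.recv())
            print(update)

asyncio.run(depth_100ms())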
To get a stable websocket connection, I recommend using UNICORN Binance WebSocket API: it catches most exceptions, reconnects automatically after a disconnect, uses asyncio internally (the callback function runs inside an event loop), and its syntax is easy:
from unicorn_binance_websocket_api.manager import BinanceWebSocketApiManager

def process_new_receives(stream_data, stream_buffer_name=False):
    print(str(stream_data))

ubwa = BinanceWebSocketApiManager(exchange="binance.com")
ubwa.create_stream('trade',
                   ['ethbtc', 'btcusdt', 'bnbbtc'],
                   process_stream_data=process_new_receives)

Since you seem to be in a rush, below is what I use, although I'm using the websockets library to make the calls. I'll take a look at the Binance API when I have more time to see if I can make the calls faster, but this should hopefully achieve what you want.
You can change the delay between requests by changing the sleep time in await asyncio.sleep(0.5), but if you set it any lower than 0.5 seconds it will trigger an error (Binance limits how many messages a client may send over the socket, and this code re-sends the SUBSCRIBE request on every iteration): received 1008 (policy violation) Too many requests; then sent 1008 (policy violation) Too many requests
import asyncio
import websockets
import json

msg = {"method": "SUBSCRIBE",
       "params": ["btcusdt@depth"],
       "id": 1}

async def call_api():
    async with websockets.connect('wss://stream.binance.com:9443/ws/btcusdt@depth') as ws:
        while True:
            # Note: the /ws/btcusdt@depth path already subscribes to the
            # stream, so re-sending SUBSCRIBE here is redundant and is what
            # trips the rate limit when the sleep is shortened.
            await ws.send(json.dumps(msg))
            response = await asyncio.wait_for(ws.recv(), timeout=2)
            response = json.loads(response)
            print(response)
            await asyncio.sleep(0.5)

asyncio.get_event_loop().run_until_complete(call_api())

Try this out:
import asyncio
import websockets
import json

async def hello():
    async with websockets.connect("wss://stream.binance.com:9443/ws/btcusdt@bookTicker") as ws:
        while True:
            response = await asyncio.wait_for(ws.recv(), timeout=2)
            response = json.loads(response)
            print(response)
            # bookTicker pushes in real time; this sleep only throttles how
            # fast we read, so messages can queue up in the buffer meanwhile
            await asyncio.sleep(0.5)

asyncio.get_event_loop().run_until_complete(hello())

Related

How to connect to User Data Stream Binance?

I need to listen to the User Data Stream; whenever there's an order event - order execution, cancellation, and so on - I'd like to be able to listen to those events and create notifications.
So I got my "listenKey", and I'm not sure if it was done the right way, but I executed this code and it gave me something like a listenKey.
Code to get listenKey:
import requests

def get_listen_key_by_REST(binance_api_key):
    url = 'https://api.binance.com/api/v1/userDataStream'
    response = requests.post(url, headers={'X-MBX-APIKEY': binance_api_key})
    json = response.json()
    return json['listenKey']

print(get_listen_key_by_REST(API_KEY))
And the code to listen to the User Data Stream - which doesn't work; I get no JSON response.
import json
import websocket

socket = f"wss://fstream-auth.binance.com/ws/btcusdt@markPrice?listenKey=<listenKeyhere>"

def on_message(ws, message):
    json_message = json.loads(message)
    print(json_message)

def on_close(ws):
    print(f"Connection Closed")
    # restart()

def on_error(ws, error):
    print(f"Error")
    print(error)

ws = websocket.WebSocketApp(socket, on_message=on_message, on_close=on_close, on_error=on_error)
I have read the docs to no avail. I'd appreciate it if someone could point me in the right direction.
You can create a basic async user socket connection from the docs here, along with other useful info for the Binance API. Here is a simple example:
import asyncio
from binance import AsyncClient, BinanceSocketManager

async def main():
    # api_key and api_secret are your own credentials
    client = await AsyncClient.create(api_key, api_secret, tld='us')
    bm = BinanceSocketManager(client)
    # start any sockets here, i.e. a user socket
    ts = bm.user_socket()
    # then start receiving messages
    async with ts as tscm:
        while True:
            res = await tscm.recv()
            print(res)
    await client.close_connection()

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
I just figured this out myself and I was able to get mine to work so I'll try my best to guide you. I believe you're just missing this line of code after you create your WebSocket object:
ws.run_forever()
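In context, with the objects from your snippet (run_forever blocks and dispatches your callbacks until the connection closes):

ws = websocket.WebSocketApp(socket, on_message=on_message,
                            on_close=on_close, on_error=on_error)
ws.run_forever()  # blocks here, invoking the callbacks as events arrive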
Some other reasons it might not be working: if you want to detect orders on your futures account, you need to use the futures endpoint. I think the one you're using is for spot trading (not sure).
url = 'https://fapi.binance.com'
And just in case it's not clear to you, you must replace:
<listenkeyhere>
in the socket URL with your listen key - angle brackets and all.
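Also worth knowing (per the Binance docs): a listen key expires after about 60 minutes unless you send a keepalive roughly every 30 minutes. A minimal sketch for the spot endpoint (the function name is made up for illustration; for futures, the fapi equivalent applies):

import requests

def keepalive_listen_key(binance_api_key, listen_key):
    # PUT extends the key's validity; Binance recommends one every ~30 minutes
    url = 'https://api.binance.com/api/v3/userDataStream'
    requests.put(url, headers={'X-MBX-APIKEY': binance_api_key},
                 params={'listenKey': listen_key})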

Python aiohttp ClientSession requests memory leak?

I believe I have unearthed a memory leak in my long-lived application when using aiohttp ClientSession requests. If each coroutine that makes a request is awaited sequentially, then all seems fine. However, there seems to be a leak of request context manager objects when they run concurrently.
Please consider the following example code:
import logging
import tracemalloc
import asyncio
import aiohttp

async def log_allocations_coro():
    while True:
        await asyncio.sleep(120)
        snapshot = tracemalloc.take_snapshot()
        top_stats = snapshot.statistics('lineno')
        str_list = [str(x) for x in top_stats[:5]]
        logging.info("\n".join(str_list))

async def do_request():
    try:
        async with session.request("GET", "http://192.168.1.1") as response:
            text = await response.text()
    except:
        logging.exception("Request failed")

async def main():
    tracemalloc.start()
    asyncio.ensure_future(log_allocations_coro())
    timeout = aiohttp.ClientTimeout(total=1)
    global session
    session = aiohttp.ClientSession(timeout=timeout)
    while True:
        tasks = [do_request(), do_request()]
        await asyncio.gather(*tasks)
        await asyncio.sleep(2)

if __name__ == '__main__':
    logging.basicConfig(format='%(asctime)s %(message)s', level=logging.INFO)
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
The tracemalloc coroutine logs memory allocations every two minutes. This shows the count of allocations in aiohttp/client.py - where request() returns a _RequestContextManager - increasing over time: quickly at first, then slowing until it reaches a peak and seems fairly stable.
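For what it's worth, a diff between successive snapshots (tracemalloc's Snapshot.compare_to) makes that growth easier to read than absolute counts; a minimal variant of the logging coroutine:

async def log_growth_coro():
    # like log_allocations_coro, but logs the delta between snapshots,
    # so steady growth stands out from stable reuse
    previous = tracemalloc.take_snapshot()
    while True:
        await asyncio.sleep(120)
        snapshot = tracemalloc.take_snapshot()
        diff = snapshot.compare_to(previous, 'lineno')
        logging.info("\n".join(str(x) for x in diff[:5]))
        previous = snapshot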
However, if there is a network problem and requests start to fail, the count ramps back up again - and doesn't come back down after the problem has been resolved.
Is this a leak? If so, is there a way to work around it?
Thanks for reading!

Send / receive in parallel using websockets in Python FastAPI

I will try to explain what I am doing with an example. Say I am building a weather client. The browser sends a message over a websocket, e.g.:
{
    "city": "Chicago",
    "country": "US"
}
The server queries the weather every 5 minutes and updates the browser back with the latest data.
Now the browser could send another message, e.g.:
{
    "city": "Bangalore",
    "country": "IN"
}
Now the server should STOP updating the weather details for Chicago and start updating the details for Bangalore, i.e. simultaneously send and receive messages over the websocket. How should I go about implementing this?
Currently I have this, but it only updates the browser upon receiving an event:
@app.websocket("/ws")
async def read_websocket(websocket: WebSocket):
    await websocket.accept()
    weather_client = WeatherClient(client)
    while True:
        data = await websocket.receive_json()
        weather = await weather_client.weather(data)
        await websocket.send_json(weather.dict())
If I move websocket.receive_json() outside the loop, I won't be able to continuously listen for messages from the browser. I guess I need to spin up two asyncio tasks, but I am not quite able to nail down the implementation since I am new to the asynchronous way of programming.
The simplest way to do this is, as you mentioned, moving the reading outside of the loop in a separate task. In this paradigm you'll need to update a local variable with the latest data, making your code look something like this:
@app.websocket("/ws")
async def read_websocket(websocket: WebSocket):
    await websocket.accept()
    json_data = await websocket.receive_json()

    async def read_from_socket(websocket: WebSocket):
        nonlocal json_data
        async for data in websocket.iter_json():
            json_data = data

    asyncio.create_task(read_from_socket(websocket))
    while True:
        print(f"getting weather data for {json_data}")
        await asyncio.sleep(1)  # simulate a slow call to the weather service
Note I've used the iter_json asynchronous generator, which amounts to an infinite loop over receive_json.
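Conceptually it behaves something like this rough sketch (the real iter_json also stops cleanly when the client disconnects):

async def iter_json_equivalent(websocket: WebSocket):
    # an infinite async generator over receive_json
    while True:
        yield await websocket.receive_json()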
This will work but may have a bug depending on your requirements. Imagine that the weather service takes 10 seconds to complete and in that time the user sends three requests for different cities over the socket. In the code above you'll only get the latest city the user sent. That might be fine for your application, but if you need to keep track of all that the user sent you'll need to use a queue. In this paradigm you'll have one task reading data and putting it on the queue and one task getting data from the queue and querying the weather service. You'll then run these concurrently with gather.
@app.websocket("/wsqueue")
async def read_websocket(websocket: WebSocket):
    await websocket.accept()
    queue = asyncio.queues.Queue()

    async def read_from_socket(websocket: WebSocket):
        async for data in websocket.iter_json():
            print(f"putting {data} in the queue")
            queue.put_nowait(data)

    async def get_data_and_send():
        data = await queue.get()
        while True:
            if queue.empty():
                print(f"getting weather data for {data}")
                await asyncio.sleep(1)
            else:
                data = queue.get_nowait()
                print(f"Setting data to {data}")

    await asyncio.gather(read_from_socket(websocket), get_data_and_send())
In this way, you won't lose data the user sends. In the example above, I only get weather data for the latest request, but you still have access to everything the user sent.
EDIT: To answer your question in the comments, a queue approach is probably best for canceling tasks when new requests come in. Basically, move the long-running task you want to be able to cancel into its own coroutine function (in this example read_and_send_to_client) and run it as a task. When new data comes in, if that task is not finished, cancel it and then create a new one.
async def read_and_send_to_client(data):
    print(f'reading {data} from client')
    await asyncio.sleep(10)  # simulate a slow call
    print(f'finished reading {data}, sending to websocket client')

@app.websocket("/wsqueue")
async def read_websocket(websocket: WebSocket):
    await websocket.accept()
    queue = asyncio.queues.Queue()

    async def read_from_socket(websocket: WebSocket):
        async for data in websocket.iter_json():
            print(f"putting {data} in the queue")
            queue.put_nowait(data)

    async def get_data_and_send():
        data = await queue.get()
        fetch_task = asyncio.create_task(read_and_send_to_client(data))
        while True:
            data = await queue.get()
            if not fetch_task.done():
                print(f'Got new data while task not complete, canceling.')
                fetch_task.cancel()
            fetch_task = asyncio.create_task(read_and_send_to_client(data))

    await asyncio.gather(read_from_socket(websocket), get_data_and_send())

Safely awaiting two event sources in asyncio/Quart

Quart is a Python web framework which re-implements the Flask API on top of the asyncio coroutine system of Python. In my particular case, I have a Quart websocket endpoint which is supposed to have not just one source of incoming events, but two possible sources of events which are supposed to continue the asynchronous loop.
An example with one event source:
from quart import Quart, websocket

app = Quart(__name__)

@app.websocket("/echo")
async def echo():
    while True:
        incoming_message = await websocket.receive()
        await websocket.send(incoming_message)
Taken from https://pgjones.gitlab.io/quart/
This example has one source: the incoming message stream. But what is the correct pattern if I had two possible sources, one being await websocket.receive() and the other being something along the lines of await system.get_next_external_notification()?
If either of them arrives, I'd like to send a websocket message.
I think I'll have to use asyncio.wait(..., return_when=FIRST_COMPLETED), but how do I make sure that I miss no data (i.e. handle the race condition where websocket.receive() and system.get_next_external_notification() both finish almost exactly at the same time)? What's the correct pattern in this case?
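One way to address the race directly with asyncio.wait: keep one pending task per source and re-create only the task that completed. Tasks left in pending are not cancelled; they stay scheduled and are awaited again on the next iteration, so a result that arrives "at the same time" is simply picked up one loop later instead of being lost. A sketch reusing the app from the example above (system and get_next_external_notification are the question's placeholders):

import asyncio

@app.websocket("/echo")
async def echo():
    receive_task = asyncio.ensure_future(websocket.receive())
    notify_task = asyncio.ensure_future(system.get_next_external_notification())
    while True:
        done, _pending = await asyncio.wait(
            {receive_task, notify_task},
            return_when=asyncio.FIRST_COMPLETED,
        )
        # re-create only the task(s) that actually finished
        if receive_task in done:
            await websocket.send(receive_task.result())
            receive_task = asyncio.ensure_future(websocket.receive())
        if notify_task in done:
            await websocket.send(notify_task.result())
            notify_task = asyncio.ensure_future(system.get_next_external_notification())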
An idea you could use is a Queue to join the events from the different sources together, then have an async function listening in the background to that queue for requests. Something like this might get you started:
import asyncio
from quart import Quart, websocket

app = Quart(__name__)

@app.before_serving
async def startup():
    print(f'starting')
    app.q = asyncio.Queue(1)
    asyncio.ensure_future(listener(app.q))

async def listener(q):
    while True:
        returnq, msg = await q.get()
        print(msg)
        await returnq.put(f'hi: {msg}')

@app.route("/echo/<message>")
async def echo(message):
    while True:
        returnq = asyncio.Queue(1)
        await app.q.put((returnq, message))
        response = await returnq.get()
        return response

@app.route("/echo2/<message>")
async def echo2(message):
    while True:
        returnq = asyncio.Queue(1)
        await app.q.put((returnq, message))
        response = await returnq.get()
        return response

Enforce serial requests in asyncio

I've been using asyncio and the HTTP requests package aiohttp recently, and I've run into a problem.
My application talks to a REST API.
For some API endpoints it makes sense to dispatch multiple requests in parallel, e.g. sending different queries to the same endpoint to get different data.
For other endpoints, though, this doesn't make sense: the endpoint always takes the same arguments (authentication) and returns the requested information, so there's no point asking for the same data multiple times before the server has responded once. For these endpoints I need to enforce a 'serial' flow of requests, in which my program can only send a request when it's not already waiting for a response (the typical behavior of blocking requests).
Of course, I don't want to block.
This is an abstraction of what I intend to do: essentially, wrap the endpoint in an async generator that enforces this serial behavior.
I feel like I'm reinventing the wheel. Is there a common solution to this issue?
import asyncio

# Encapsulate the idea of an endpoint that can't handle multiple requests
async def serialendpoint():
    count = 0
    while True:
        count += 1
        await asyncio.sleep(2)
        yield str(count)

# Pretend client object
class ExampleClient(object):
    gen = serialendpoint()

    # Simulate a http request that sends multiple requests
    async def simulate_multiple_http_requests(self):
        print(await self.gen.asend(None))
        print(await self.gen.asend(None))
        print(await self.gen.asend(None))
        print(await self.gen.asend(None))

async def other_stuff():
    for _ in range(6):
        await asyncio.sleep(1)
        print('doing async stuff')

client = ExampleClient()
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(client.simulate_multiple_http_requests(),
                                       client.simulate_multiple_http_requests(),
                                       other_stuff()))
outputs
doing async stuff
1
doing async stuff
doing async stuff
2
doing async stuff
doing async stuff
3
doing async stuff
4
5
6
7
8
update
This is the actual async generator I implemented:
All the endpoints that require serial behavior get assigned a serial_request_async_generator during the import phase, which meant I couldn't prime them with an await async_gen.asend(None), as await is only allowed inside a coroutine. The compromise is that every serial request at runtime must call .asend(None) before asending the actual arguments. There must be a better way!
async def serial_request_async_generator():
    args, kwargs = yield
    while True:
        yield await request(*args, **kwargs)  # request is an aiohttp request
        args, kwargs = yield
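For what it's worth, the more common solution to "only one request in flight at a time" is an asyncio.Lock rather than an async generator, which avoids the priming .asend(None) dance entirely. A sketch (SerialEndpoint, session, and url are illustrative names, not part of the code above):

import asyncio
import aiohttp

class SerialEndpoint:
    def __init__(self, session: aiohttp.ClientSession, url: str):
        self._session = session
        self._url = url
        self._lock = asyncio.Lock()

    async def get(self, **params):
        # Only one coroutine can hold the lock, so at most one request to
        # this endpoint is in flight; the rest await their turn here
        # without blocking the event loop.
        async with self._lock:
            async with self._session.get(self._url, params=params) as response:
                return await response.json()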
