I'm trying to use the pybit package (https://github.com/verata-veritatis/pybit) on a crypto exchange, but when I try to fetch data from the websocket, all I get is an empty object as a response.
from pybit import WebSocket
endpoint_public = 'wss://stream.bybit.com/realtime_public'
subs = [
'orderBookL2_25.BTCUSD',
'instrument_info.100ms.BTCUSD',
'last_price.BTCUSD'
]
ws_unauth = WebSocket(endpoint_public, subscriptions=subs)
ws_unauth.fetch('last_price.BTCUSD')
The output is this:
{}
EDIT: 2022.09.19
It seems they have changed the code in the module, and the examples in the documentation are now different. They no longer use fetch(); instead they assign subscriptions to functions (handlers), and the WebSocket runs its own (hidden) loop that fetches data and executes the assigned function.
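A minimal sketch of that handler style, pieced together from the pybit 2.x examples - the module name (inverse_perpetual) and the stream method (orderbook_25_stream) are assumptions and may differ in the version you have installed:

import time
from pybit import inverse_perpetual  # assumed module name for inverse (BTCUSD) markets

def handle_orderbook(message):
    # pybit calls this from its own internal loop for every message it receives
    print(message)

ws = inverse_perpetual.WebSocket(test=False)
ws.orderbook_25_stream(handle_orderbook, "BTCUSD")  # assumed method name

# keep the main thread alive so the websocket thread keeps running
while True:
    time.sleep(1)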
I found three problems:
First: the code works for me if I use the endpoint realtime instead of realtime_public - I found it somewhere in the ByBit API documentation (not in the documentation for the Python module).
Second: there is no 'last_price.BTCUSD' in the documentation - it generates errors when I try it with the endpoint realtime, and then the other subscriptions don't work either.
Third: the first fetch may not return a result, so it may be necessary to sleep() for a short time before the first fetch. Normally the code should run in a loop and fetch data every few (milli)seconds, and then this is not a problem. You can also use an if statement to run code only when you actually get data.
import pybit
import time
endpoint_public = 'wss://stream.bybit.com/realtime'
subs = [
'orderBookL2_25.BTCUSD',
'instrument_info.100ms.BTCUSD',
# 'last_price.BTCUSD'
]
ws_unauth = pybit.WebSocket(endpoint_public, subscriptions=subs)
time.sleep(1)
#print(ws_unauth.fetch('last_price.BTCUSD')) # doesn't work with `realtime_public`; generates an error with `realtime`
print(ws_unauth.fetch('orderBookL2_25.BTCUSD')) # doesn't work with `realtime_public`; works with `realtime`
Result:
[
{'price': '40702.50', 'symbol': 'BTCUSD', 'id': 407025000, 'side': 'Buy', 'size': 350009},
{'price': '40703.00', 'symbol': 'BTCUSD', 'id': 407030000, 'side': 'Buy', 'size': 10069},
{'price': '40705.00', 'symbol': 'BTCUSD', 'id': 407050000, 'side': 'Buy', 'size': 28},
# ...
]
BTW:
The ByBit API documentation also shows examples for Public Topics.
They use:
realtime instead of realtime_public,
a loop to fetch data periodically,
an if data check to skip empty responses.
from pybit import WebSocket
subs = [
"orderBookL2_25.BTCUSD"
]
ws = WebSocket(
    "wss://stream-testnet.bybit.com/realtime",
    subscriptions=subs
)

while True:
    data = ws.fetch(subs[0])
    if data:
        print(data)
The ByBit API documentation also shows examples for Private Topics.
They also use:
realtime instead of realtime_public (plus api_key and api_secret),
a loop to fetch data periodically,
an if data check to skip empty responses.
For testing they use stream-testnet, but real code should use stream.
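A minimal sketch of that private-topic variant, assuming the same pybit.WebSocket constructor also accepts api_key and api_secret and that position and order are valid private topic names (check the ByBit docs for the exact names; the key and secret here are placeholders):

import time
import pybit

subs = [
    'position',
    'order',
]

ws_auth = pybit.WebSocket(
    'wss://stream-testnet.bybit.com/realtime',  # use stream.bybit.com in real code
    subscriptions=subs,
    api_key='...',      # placeholder
    api_secret='...'    # placeholder
)

while True:
    data = ws_auth.fetch(subs[0])
    if data:
        print(data)
    time.sleep(0.1)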
I've recently started using sonar.cloud.whatever.com. In the portfolio I get, by default, the analysis result for the master branch of any given project, so if I need to collect the information for multiple branches of a project I have to select those branches one by one.
For the moment I don't know of any simpler way to do this offered by Sonar,
so I started a Python script using the web service API provided by Sonar.
I started (just to try it out) by collecting all the issues using /api/issues/search:
import json
import sys
import os
import requests
def usage():
    print("hello")

def collectIssues():
    r = requests.get('https://sonar.cloud.sec.NotToMention.com/api/issues/search?componentKeys=project_key&statuses=OPEN')
    print("status code ", r.status_code)
    print(r.url)
    #print(r.headers)
    if r.status_code != 200:
        exit(0)
    data = r.json()
    print(r.headers, "\n\n")
    print(data)
    print(data['issues'])

def main(args):
    collectIssues()

if __name__ == '__main__':
    main(sys.argv)
    exit(0)
If I copy the link and open it in a browser I get the right result, with a total of 1000 issues, but this script returns total 0 and issues = [].
(I just want to note that project_name and NotToMention are not the real values; I replaced them here for security reasons.)
The result of this script is:
status_code : 200
https://sonar.cloud.NotTOMention.com/api/issues/search?componentKeys=project_name&statuses=OPEN
JSON RESULT :
{'total': 0, 'p': 1, 'ps': 100, 'paging': {'pageIndex': 1, 'pageSize': 100, 'total': 0}, 'effortTotal': 0, 'debtTotal': 0, 'issues': [], 'components': [], 'facets': []}
Thanks for any advice.
Best regards.
I am processing Kafka streams in a Python Flask server. I read the responses as JSON and need to extract the udid values from the stream. I read each response with request.json and save it in a dictionary. When I try to parse it, it fails. The dict contains the following values:
dict_items([('data', {'SDKVersion': '7.1.2', 'appVersion': '6.5.5', 'dateTime': '2019-08-05 15:01:28+0200', 'device': 'iPhone', 'id': '3971',....})])
Parsing the dict the normal way doesn't work, i.e. event_data['status'] gives an error. Perhaps it is because it's not a pure dict?
from flask import Flask, request

app = Flask(__name__)

@app.route('/data/idApp/5710/event/start', methods=['POST'])
def give_greeting():
    print("Hola")
    event_data = request.json
    print(event_data.items())
    print(event_data['status'])
    #print(event_data['udid'])
    #print(event_data['Additional'])
    return 'Hello, {0}!'.format(event_data)
The expected result would be this:
print(event_data['status'])->start
print(event_data['udid'])->BAEB347B-9110-4CC8-BF99-FA4039C4599B
print(event_data['SDKVersion'])->7.1.2
etc
The output of print(event_data.keys()) is dict_keys(['data']).
The data you are expecting is wrapped in an additional data property.
You only need to do one extra step to access this data.
data_dict = request.json
event_data = data_dict['data']
Now you should be able to access the information you want with
event_data['SDKVersion']
...
as you have described above.
As @jonrsharpe stated, this is not an issue with the parsing. The parsing either fails or succeeds, but you will never get a "broken" object (be it dict, list, ...) from parsing JSON.
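For completeness, a minimal sketch of the route with that extra unwrapping step - assuming the nested data object really does contain the keys shown in the question (status, udid, SDKVersion), and using .get() so a missing key prints None instead of raising:

from flask import Flask, request

app = Flask(__name__)

@app.route('/data/idApp/5710/event/start', methods=['POST'])
def give_greeting():
    data_dict = request.json        # whole payload: {'data': {...}}
    event_data = data_dict['data']  # unwrap the nested 'data' object
    print(event_data.get('status'))
    print(event_data.get('udid'))
    print(event_data.get('SDKVersion'))
    return 'Hello, {0}!'.format(event_data)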
I implemented a simple batch upload with the following code, assuming I could aggregate a pre-set number of JSON docs (aka ecpDocSorted, which is a dict) into a bulk list and flush it, e.g., after collecting 5 docs. The ecpDocSorted dict contains a simple JSON structure - all key/values, with no ids.
The code snippet looks like this:
#
# Sorting the ecpDoc by keys
#
for k in sorted(ecpDoc.keys()):
    ecpDocSorted[k] = ecpDoc[k]
ecpDoc = dict(ecpDocSorted)

#
# Insert into MongoDB
#
bulk.append(ecpDocSorted)
if len(bulk) == 5:
    # insert into Mongo
    result = mycol.insert_many(bulk)
    print(result)
    bulk = []
Uploading an individual doc (using len(bulk)==1) works fine and the document ends up in Mongo.
For any other number (e.g. len(bulk)==5) it fails with the following error:
raise BulkWriteError(full_result)
pymongo.errors.BulkWriteError: batch op errors occurred
What am I missing?
Added based on comment:
ecpDocSorted example:
{'address1': 'SOME ADDRESS', 'city': 'Arecibo', 'country': 'US', 'languages': 'English', 'name': 'SOME NAME', 'phone': '123-123-1234', 'postalcode': '00612'}
The issue seems to be with ecpDocSorted.
When using
bulk.append(ecpDoc)
instead of
bulk.append(ecpDocSorted)
it just works fine.
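A likely explanation, assuming ecpDocSorted is created once and then reused across iterations of an outer loop (which is what the fix suggests): bulk ends up holding five references to the same dict object. insert_many() assigns an _id to that object in place, so all five entries share one _id and the server rejects the repeats as duplicate key errors, which surfaces as BulkWriteError. ecpDoc works because dict(ecpDocSorted) produces a fresh object on every iteration. Appending a copy has the same effect; here is a self-contained sketch (the connection, collection and source_docs names are made up for illustration):

from pymongo import MongoClient

mycol = MongoClient()['testdb']['ecp']   # hypothetical local db/collection

source_docs = [                          # stand-in for wherever the ecpDoc dicts come from
    {'name': 'SOME NAME', 'city': 'Arecibo'},
    {'name': 'OTHER NAME', 'city': 'Ponce'},
]

bulk = []
ecpDocSorted = {}                        # one dict object reused across iterations

for ecpDoc in source_docs:
    ecpDocSorted.clear()
    for k in sorted(ecpDoc.keys()):
        ecpDocSorted[k] = ecpDoc[k]

    bulk.append(dict(ecpDocSorted))      # append a copy, not the shared object
    if len(bulk) == 5:
        mycol.insert_many(bulk)
        bulk = []

if bulk:                                 # flush whatever is left
    result = mycol.insert_many(bulk)
    print(result.inserted_ids)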
I'm not having any luck finding this while browsing through the API documentation. I would be surprised if this wasn't possible. I have this to create a snapshot using boto:
conn.create_snapshot(volume_id, "This shows up in the description column")
This works, but I would like to properly tag the snapshot with a {Name: "my tag"}. Does anyone know if there is a way to do this while creating the snapshot? If that's not possible, is it possible to add a tag to the snapshot object after creation?
It's not possible to add tags when you create a Snapshot. The EC2 API does not support that. However, you can easily tag a Snapshot after creating it. There are a couple of ways to do that.
The first uses the Snapshot object returned by the create_snapshot method:
snapshot = conn.create_snapshot(volume_id, "This shows up in the description column")
snapshot.add_tags({'foo': 'bar', 'fie': 'bas'})
Or, you can use the generic create_tags method which can be used to add tags to any tag-able resource:
conn.create_tags('snap-12345678', {'foo': 'bar', 'fie': 'baz'})
As per this AWS announcement it is possible to pass tags when creating the snapshot since April 2018.
Here is a snippet doing just that:
snap = ec.create_snapshot(
    Description="Recent Snapshot",
    VolumeId=volume_id,
    TagSpecifications=[{
        'ResourceType': 'snapshot',
        'Tags': [
            {'Key': 'Name', 'Value': snapshot},
            {'Key': 'InstanceId', 'Value': instance_id}
        ]
    }]
)
As you can see, the former ec2.create_tags() call has been merged into this.
I managed to do it using the create_tags method only (the add_tags method raised an AttributeError, as JavaQueen mentioned in the comments above).
Example:
snapshot = conn.create_snapshot(volume_id, "This shows up in the description column")
conn.create_tags(
Resources=[
snapshot['SnapshotId'],
],
Tags=[
{'Key': 'Name', 'Value': 'myTagValue'}
]
)
Inspired by Eddie Trejo's contribution: https://stackoverflow.com/a/44796462/1973233
Note: you may have to update the underlying policy to allow the ec2:CreateSnapshot permission.
I'm trying to transcode some videos, but something is wrong with the way I am connecting.
Here's my code:
from boto.elastictranscoder import layer1

transcode = layer1.ElasticTranscoderConnection()
transcode.DefaultRegionEndpoint = 'elastictranscoder.us-west-2.amazonaws.com'
transcode.DefaultRegionName = 'us-west-2'
transcode.create_job(pipelineId, transInput, transOutput)
Here's the exception:
{u'message': u'The specified pipeline was not found: account=xxxxxx, pipelineId=xxxxxx.'}
To connect to a specific region in boto, you can use:
import boto.elastictranscoder
transcode = boto.elastictranscoder.connect_to_region('us-west-2')
transcode.create_job(...)
I just started using boto the other day, but the previous answer didn't work for me - I don't know if the API changed or what (it seems a little weird if it did, but anyway). This is how I did it.
#!/usr/bin/env python
# Boto
import boto
# Debug
boto.set_stream_logger('boto')
# Pipeline Id
pipeline_id = 'lotsofcharacters-393824'
# The input object
input_object = {
'Key': 'foo.webm',
'Container': 'webm',
'AspectRatio': 'auto',
'FrameRate': 'auto',
'Resolution': 'auto',
'Interlaced': 'auto'
}
# The object (or objects) that will be created by the transcoding job;
# note that this is a list of dictionaries.
output_objects = [
{
'Key': 'bar.mp4',
'PresetId': '1351620000001-000010',
'Rotate': 'auto',
'ThumbnailPattern': '',
}
]
# Phone home
# - Har har.
et = boto.connect_elastictranscoder()
# Create the job
# - If successful, this will execute immediately.
et.create_job(pipeline_id, input_name=input_object, outputs=output_objects)
Obviously, this is a contrived example and just runs from a standalone python script; it assumes you have a .boto file somewhere with your credentials in it.
Another thing to note is the PresetIds; you can find these in the AWS Management Console for Elastic Transcoder, under Presets. Finally, the values that can be stuffed into the dictionaries are lifted verbatim from the following link - as far as I can tell, they are just interpolated into a REST call (case sensitive, obviously):
AWS Create Job API