Get all transactions from OKX with Python

I am trying to build a full overview of my transactions (buy/sell/deposit/withdrawal/earnings and bot trades) with Python for OKX, but I only get 2 trades back (and I have made more than 2).
I have tried sending requests with orders-history-archive and with fetchMyTrades from the CCXT library (I have tried some other functions as well, but I still don't get my transactions).
Is there some way to get a full overview for OKX with Python (and for other brokers/wallets)?
Here is how I try to get the data with CCXT (it gives only 2 outputs):
def getMyTrades(self):
    tData = []
    tSymboles = [
        'BTC/USDT',
        'ETH/USDT',
        'SHIB/USDT',
        'CELO/USDT',
        'XRP/USDT',
        'SAMO/USDT',
        'NEAR/USDT',
        'ETHW/USDT',
        'DOGE/USDT',
        'SOL/USDT',
        'LUNA/USDT',
    ]
    for item in tSymboles:
        if exchange.has['fetchMyTrades']:
            since = exchange.milliseconds() - 60*60*24*180*1000  # 180 days back from now
            while since < exchange.milliseconds():
                symbol = item  # change for your symbol
                limit = 20     # change for your limit
                orders = exchange.fetchMyTrades(symbol, since, limit)
                if len(orders):
                    since = orders[-1]['timestamp'] + 1  # continue after the newest fill
                    tData += orders
                else:
                    break
    return tData  # return the accumulated trades
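Note that fetchMyTrades returns spot trade fills for one symbol at a time, so deposits, withdrawals, and earn payouts will never appear in its output. Below is a minimal sketch, not a verified implementation, that pages through the account ledger with CCXT's fetchLedger, which covers balance changes (deposits, withdrawals, interest) alongside trades in one stream. It assumes CCXT's okx exchange class and your own API credentials; the key values are placeholders, and you should check exchange.has['fetchLedger'] before relying on it.
import ccxt

exchange = ccxt.okx({
    'apiKey': 'YOUR_KEY',           # placeholder
    'secret': 'YOUR_SECRET',        # placeholder
    'password': 'YOUR_PASSPHRASE',  # placeholder
})

entries = []
if exchange.has.get('fetchLedger'):
    since = exchange.milliseconds() - 180 * 24 * 60 * 60 * 1000  # 180 days back
    while since < exchange.milliseconds():
        batch = exchange.fetchLedger(code=None, since=since, limit=100)
        if not batch:
            break
        since = batch[-1]['timestamp'] + 1  # move the cursor past the newest entry
        entries += batch
print('%d ledger entries' % len(entries))
For deposits and withdrawals specifically, fetchDeposits and fetchWithdrawals follow the same pagination pattern, and because CCXT exposes a unified interface, the same loop works against other supported exchanges, which is the closest thing to a single overview across brokers/wallets.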

Related

Retrieve all the API response while you have maximum offset in Python

I am attempting to retrieve data from an API that has a maximum offset of 200000. The records I am attempting to pull number more than the max offset. Below is a sample of the code I am using, but when I reach the offset limit of 200000 it breaks (the API doesn't return any helpful response in terms of how many pages/requests I need to make, which is why I keep going until there are no more results). I need to find a way to loop through and pull all the data. Thanks.
def pull_api_data():
    offset_limit = 0
    teltel_data = []
    # Loop through the results and add if present
    while True:
        print("Skip", offset_limit, "rows before beginning to return results")
        querystring = {
            "offset": "{}".format(offset_limit),
            "filter": "starttime>={}".format(date_filter),
            "limit": "5000",
        }
        response = session.get(url=url, headers=the_headers, params=querystring)
        data = response.json()['data']
        # Do we have more data from teltel?
        if len(data) == 0:
            break
        # If yes, then add the data to the main list, teltel_data
        teltel_data.extend(data)
        # Increase offset_limit to skip the already added data
        offset_limit = offset_limit + 5000

# Transform the raw data by converting it to a dataframe and do necessary cleaning
pull_api_data()
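One workaround, assuming every returned record carries the starttime field already used in the filter: page in time windows instead of with one ever-growing offset. Once the offset approaches the 200000 cap, advance date_filter to the newest starttime fetched so far and reset the offset to 0. A sketch reusing the question's session, url, the_headers, and date_filter names:
def pull_api_data(date_filter):
    teltel_data = []
    offset_limit = 0
    while True:
        querystring = {
            "offset": "{}".format(offset_limit),
            "filter": "starttime>={}".format(date_filter),
            "limit": "5000",
        }
        response = session.get(url=url, headers=the_headers, params=querystring)
        data = response.json()['data']
        if len(data) == 0:
            break
        teltel_data.extend(data)
        offset_limit += 5000
        if offset_limit >= 200000:
            # About to hit the server-side cap: narrow the time window
            # and restart offset paging inside it.
            date_filter = max(row['starttime'] for row in data)
            offset_limit = 0
    return teltel_data
Depending on timestamp granularity, a few boundary rows may be fetched twice at each window change, so deduplicate on a record ID afterwards if one exists.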

Problem using cloned array data without modifying original (Python)

I have a project to do for a Python initiation course, but I am stuck close to the end because of a problem.
My problem is the following:
I want to use a copy of my "tdata" data frame, composed of the values of the different attributes of a class called "world", to make changes to it (trying to do some forecasting with the current levels of the indicators).
I tried to do it by generating a new data frame "graphdat", which I used in a function to generate a graph.
My problem is that, in the end, my "tdata" array is also modified.
I tried to use graphdat = tdata.copy(), but it returns an AttributeError: 'world' object has no attribute 'copy'.
Would anyone know how I could do it another way?
Thank you!
def graph_ppm(self):
    self.price_ppm = 10
    self.budget -= self.price_ppm
    period = tdata.period
    graphdat = tdata
    while period < 30:
        period += 1
        graphdat.sup = (graphdat.emit - graphdat.absorb)
        graphdat.ppm += graphdat.sup
        yppm.append(round(graphdat.ppm, 2))
EDIT:
I think I misunderstood the whole problem.
As suggested by Md Imbesat Hassan Rizvi, I decided to use graphdat = copy.deepcopy(tdata), but since I want to use this function multiple times, I need to reinitialize graphdat to the current level of the parameters and the current period each time.
The problem is that I obtain this kind of graph if I run the function multiple times:
(graph screenshot)
My maximum period is 30, and I want to get rid of the past values, creating a completely new graph.
def graph_temp(self):
    self.price_temp = 10
    self.budget -= self.price_temp
    graphdat = copy.deepcopy(tdata)
    period = graphdat.period
    plx.clear_plot()
    while period < 30:
        period += 1
        graphdat.sup = (graphdat.emit - graphdat.absorb)
        graphdat.ppm += graphdat.sup
        if graphdat.ppm < 380:
            graphdat.temperature += graphdat.sup * 0.001
        if graphdat.ppm < 400:
            graphdat.temperature += (graphdat.sup) * 0.001
        if graphdat.ppm < 450:
            graphdat.temperature += (graphdat.sup) * 0.005
            graphdat.pop_satisfaction -= 1
        else:
            graphdat.temperature += (graphdat.sup) * 0.01
        ytemp.append(round(graphdat.temperature, 2))
    limittemp = [2] * 31
    recomtemp = [1.5] * 31
    plx.plot(ytemp, label="Temperatures forecast", line_marker="•")
    plx.plot(limittemp, label="Catastrophe level", line_marker="-")
    plx.plot(recomtemp, label="Limit level after period 30", line_marker="=")
    plx.xlabel('Temperatures')
    plx.ylabel('Period')
    plx.title('Title')
    plx.figsize(50, 25)
    plx.ticks(31, 11)
    return plx.show()
Since tdata appears to be an instance of a custom class world, for which a copy attribute doesn't exist, you can make a copy of it using the copy module:
import copy

graphdat = copy.deepcopy(tdata)
Henceforth, graphdat and tdata will be different instances of the world class.
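A minimal, self-contained illustration of the difference between plain assignment and deepcopy, using a stand-in World class since the question's world class isn't shown:
import copy

class World(object):
    def __init__(self):
        self.ppm = 400.0
        self.period = 0

tdata = World()
alias = tdata                 # plain assignment: two names, one object
clone = copy.deepcopy(tdata)  # independent instance with copied state

alias.ppm += 10
print(tdata.ppm)   # 410.0 -- mutating the alias changed the original
clone.ppm += 10
print(tdata.ppm)   # still 410.0 -- the deep copy is isolated
As for the accumulating curves in the edit: if ytemp is a module-level list, it survives between calls, so resetting it at the top of graph_temp (ytemp = [], or making it a local variable) would be needed in addition to the fresh deepcopy. That is an assumption, since the code defining ytemp isn't shown.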

How to loop through millions of Django model objects without getting an out of range or other error

I have millions of objects in a Postgres database, and I need to send data from 200 of them at a time to an API, which will give me additional information (the API can only deal with up to 200 elements at a time). I've tried several strategies. The first ended with my script getting killed because it used too much memory. The attempt below worked better, but I got the following error: django.db.utils.DataError: bigint out of range. This error happened around the time the "start" variable reached 42,000. What is a more efficient way to accomplish this task? Thank you.
articles_to_process = Article.objects.all()  # This will be in the millions
dois = articles_to_process.values_list('doi', flat=True)  # These are IDs of articles
start = 0
end = 200  # The API to which I will send IDs can only return up to 200 records at a time.
number_of_dois = dois.count()
times_to_loop = (number_of_dois / 200) + 1
while times_to_loop > 0:
    times_to_loop = times_to_loop - 1
    chunk = dois[start:end]
    doi_string = ', '.join(chunk)
    start = start + 200
    end = end + 200
    # [DO API CALL, GET DATA FOR EACH ARTICLE, SAVE THAT DATA TO ARTICLE]
Consider using iterator():
chunk_size = 200
counter = 0
idx = []
for article_id in dois.iterator(chunk_size=chunk_size):
    counter += 1
    idx.append(str(article_id))
    if counter >= chunk_size:
        doi_string = ', '.join(idx)
        idx = []
        counter = 0
        # DO API CALL, GET DATA FOR EACH ARTICLE, SAVE THAT DATA TO ARTICLE
if idx:
    # Flush the final batch of fewer than chunk_size IDs
    doi_string = ', '.join(idx)
    # DO API CALL, GET DATA FOR EACH ARTICLE, SAVE THAT DATA TO ARTICLE
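An equivalent pattern, sketched with itertools.islice, avoids the manual counter bookkeeping and handles the final short batch naturally; dois is the queryset from the question:
from itertools import islice

def batched(iterable, size):
    # Yield successive lists of at most `size` items from any iterable.
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

for batch in batched(dois.iterator(chunk_size=2000), 200):
    doi_string = ', '.join(str(doi) for doi in batch)
    # DO API CALL, GET DATA FOR EACH ARTICLE, SAVE THAT DATA TO ARTICLE
Note that the chunk_size passed to iterator() (how many rows come back from Postgres per round trip) and the 200-item API batch are independent knobs: the first bounds memory, the second satisfies the API's limit.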

IB_insync returning tiny integer for shares (1 = 100, 0 = 50?) need float or proper scaled int

Using the IB_insync API.
When loading ticker.domTicks and receiving the list of ticks, the dollar amounts appear to be correct, but the share counts show as small integers (0, 1, 3, 6, etc.) when they should most likely be scaled 100x; zero most likely means fewer than 100 shares. Because the value is not a float, it cannot simply be scaled. Does anyone know why it would be returning the share numbers this way? I recently subscribed to the ASX (Australian) exchange and noticed that its share numbers came back in the thousands, so those are presumably correct. I changed contract = Stock('AAPL', "ISLAND", "USD") to contract = Stock('CBA', "ASX", "AUD").
def runner(ticker):
    global elements
    # print(ticker.domTicks)
    for i in range(100):
        if i < len(ticker.domTicks):
            grab = ticker.domTicks[i]
            elements.append(grab)

if __name__ == "__main__":
    depth = 120
    time_samples = 260
    ib = IB()
    ib.connect('127.0.0.1', 7497, clientId=2)
    list_of_exchanges = ib.reqMktDepthExchanges()
    for items in list_of_exchanges:
        print(items)
    print(list_of_exchanges)
    contract = Stock('AAPL', "ISLAND", "USD")
    last_bid_book = np.zeros((0, depth))
    print(last_bid_book)
    last_ask_book = np.zeros((0, depth))
    elements = []
    ticker = ib.reqMktDepth(contract)
    ib.sleep(1)
    ticker.updateEvent += runner
Only round lots (and not odd lots) are typically returned in the top-of-book market data feed, because the NBBO (National Best Bid/Best Offer) rules only pertain to round-lot orders.
See: What is an "Odd Lot" in stocks?
Odd-lot orders are not posted to the bid/ask data on exchanges. As such, the bid/ask data is returned with a multiplier, which can be found in the mdSizeMultiplier field of the IBApi.ContractDetails class.
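A sketch of reading that multiplier with ib_insync and applying it to the depth ticks. The exact field name varies across TWS API versions (mdSizeMultiplier was deprecated in later releases in favor of size-increment fields), so the getattr fallback below is an assumption to verify against your own ContractDetails object:
from ib_insync import IB, Stock

ib = IB()
ib.connect('127.0.0.1', 7497, clientId=3)

contract = Stock('AAPL', 'ISLAND', 'USD')
details = ib.reqContractDetails(contract)[0]
multiplier = getattr(details, 'mdSizeMultiplier', 1) or 1  # fall back to 1 if the field is absent

ticker = ib.reqMktDepth(contract)
ib.sleep(2)
for tick in ticker.domTicks:
    print(tick.price, tick.size * multiplier)  # size rescaled to shares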

Mapping and iterating nested dictionaries

I am not too familiar with Python but have a working understanding of the basics. I believe that I need dictionaries, but what I am currently doing is not working and is likely very inefficient.
I am trying to create a cross matrix that links reviews between users, given: the list of reviewers, their individual reviews, and metadata related to the reviews.
NOTE: This is written in Python 2.7.10 - I cannot use Python 3 because of the outdated systems this will be run on, yada yada.
For initialization I have the following:
print '\nCompiling Review Maps... '
LbidMap = {}
TbidMap = {}
for user in reviewer_idx:
    for review in data['Reviewer Reviews'][user]:
        reviewInfo = data['Review Information'][review]
        stars = float(reviewInfo['stars'])
        bid = reviewInfo['business_id']
        # Initialize nested dicts where necessary
        # !!!! I know this is probably not effective, but am unsure of
        # a better method. Open to suggestions !!!!!
        if bid not in LbidMap:
            LbidMap[bid] = {}
            TbidMap[bid] = {}
        if stars not in LbidMap[bid]:
            LbidMap[bid][stars] = {}
        if user not in TbidMap[bid]:
            TbidMap[bid][user] = {}
        # Track information on ratings for each business
        LbidMap[bid][stars][user] = review
        TbidMap[bid][user][review] = stars
(where 'bid' is short for "business ID", and pos_list is an input given by the user at runtime)
I then go on to try to create a mapping of users who gave a "positive" review to business T and who also gave business L a rating of X (e.g., 5 people rated business L 4/5 stars; how many of those people also gave a "positive" review to business T?).
For mapping I have the following:
# Determine and map all users who rated business L as rL
# and gave business T a positive rating
print '\nCross matching ratings across businesses'
cross_TrL = []
for Tbid in TbidMap:
    for Lbid in LbidMap:
        # Ensure T and L aren't the same business
        if Tbid != Lbid:
            for stars in LbidMap[Lbid]:
                starSum = len(LbidMap[Lbid][stars])
                posTbid = 0
                for user in LbidMap[Lbid][stars]:
                    if user in TbidMap[Tbid]:
                        rid = LbidMap[Lbid][stars][user]
                        print 'Tbid:%s Lbid:%s user:%s rid:%s' % (Tbid, Lbid, user, rid)
                        reviewRate = TbidMap[Tbid][user][rid]
                        # If true, then we have a pos review for T from L
                        if reviewRate in pos_list:
                            posTbid += 1
                numerator = posTbid + 1
                denominator = starSum + 1
                probability = float(numerator) / denominator
I currently receive the following error (a printout of the current variables is also included):
Tbid:OlpyplEJ_c_hFxyand_Wxw Lbid:W0eocyGliMbg8NScqERaiA user:Neal_1EVupQKZKv3NsC2DA rid:TAIDnnpBMR16BwZsap9uwA
Traceback (most recent call last):
File "run_edge_testAdvProb.py", line 90, in <module>
reviewRate = TbidMap[Tbid][user][rid];
KeyError: u'TAIDnnpBMR16BwZsap9uwA'
So I know the KeyError is on what should be the rid (review ID) at that particular moment within TbidMap; however, it seems to me that the key was somehow not included during the first (initialization) code block.
What am I doing wrong? Additionally, suggestions on how to reduce the clock cycles of the second code block are welcome.
EDIT: I realized that I was trying to locate the rid of Tbid using the rid from Lbid; however, rid is unique to each review, so you would never have Tbid.rid == Lbid.rid.
I updated the second code block as follows:
cross_TrL = []
for Tbid in TbidMap:
    for Lbid in LbidMap:
        # Ensure T and L aren't the same business
        if Tbid != Lbid:
            # Get number of reviews at EACH STAR rate for L
            for stars in LbidMap[Lbid]:
                starSum = len(LbidMap[Lbid][stars])
                posTbid = 0
                # For each review, check if the user rated the Tbid
                for Lreview in LbidMap[Lbid][stars]:
                    user = LbidMap[Lbid][stars][Lreview]
                    if user in TbidMap[Tbid]:
                        # user rev'd Tbid, get their Trid
                        # and see if they gave Tbid a pos rev
                        for Trid in TbidMap[Tbid][user]:
                            # Currently this does not account for multiple reviews
                            # given by the same person. Just want to get this
                            # working and then I'll minimize this
                            Tstar = TbidMap[Tbid][user][Trid]
                            print 'Tbid:%s Lbid:%s user:%s Trid:%s' % (Tbid, Lbid, user, Trid)
                            if Tstar in pos_list:
                                posTbid += 1
                numerator = posTbid + 1
                denominator = starSum + 1
                probability = float(numerator) / denominator
                evaluation = {'Tbid': Tbid, 'Lbid': Lbid, 'star': stars, 'prob': probability}
                cross_TrL.append(evaluation)
Still slow, but I no longer receive the error.
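On the initialization flagged as clumsy in the first code block: collections.defaultdict (available in Python 2.7) removes the explicit membership checks entirely. A sketch with the same nested layout as LbidMap and TbidMap above:
from collections import defaultdict

# bid -> stars -> user -> review
LbidMap = defaultdict(lambda: defaultdict(dict))
# bid -> user -> review -> stars
TbidMap = defaultdict(lambda: defaultdict(dict))

for user in reviewer_idx:
    for review in data['Reviewer Reviews'][user]:
        reviewInfo = data['Review Information'][review]
        stars = float(reviewInfo['stars'])
        bid = reviewInfo['business_id']
        LbidMap[bid][stars][user] = review
        TbidMap[bid][user][review] = stars
In the cross-matching loop, iterating with TbidMap[Tbid][user].items() would also save one dictionary lookup per inner iteration.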
