I'm trying to build a full overview of my transactions (buy/sell/deposit/withdrawal/earnings and bot trades) with Python for OKX, but I only get 2 trades back (and I have made more than 2).
I have tried sending requests with orders-history-archive and fetchMyTrades from the CCXT library (and some other functions as well), but I still don't get my transactions.
Is there some way to get a full overview for OKX with Python (and for other brokers/wallets)?
Here is how I try to get the data with CCXT (it only gives 2 outputs):
def getMyTrades(self):
    tData = []
    tSymboles = [
        'BTC/USDT',
        'ETH/USDT',
        'SHIB/USDT',
        'CELO/USDT',
        'XRP/USDT',
        'SAMO/USDT',
        'NEAR/USDT',
        'ETHW/USDT',
        'DOGE/USDT',
        'SOL/USDT',
        'LUNA/USDT'
    ]
    for item in tSymboles:
        if exchange.has['fetchMyTrades']:
            since = exchange.milliseconds() - 60*60*24*180*1000  # -180 days from now
            while since < exchange.milliseconds():
                symbol = item  # change for your symbol
                limit = 20  # change for your limit
                orders = exchange.fetchMyTrades(symbol, since, limit)
                if len(orders):
                    since = orders[len(orders) - 1]['timestamp'] + 1
                    tData += orders
                else:
                    break
    return tData  # the original never returned the collected trades
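One thing worth noting: fetchMyTrades only covers trades, so deposits, withdrawals, and earnings will never show up there. For those, CCXT also has a unified fetchLedger method, which OKX supports as far as I know. Below is a minimal sketch of the same pagination loop run against the ledger instead; the fetch_full_ledger name and the placeholder credentials are mine, not from the original code:
import ccxt

# Assumption: your own OKX credentials (OKX also requires the API passphrase)
exchange = ccxt.okx({'apiKey': '...', 'secret': '...', 'password': '...'})

def fetch_full_ledger(code=None, days=180):
    # Page through the account ledger (trades, deposits, withdrawals,
    # fees, ...) for the last `days` days, 100 entries at a time.
    entries = []
    if not exchange.has['fetchLedger']:
        return entries
    since = exchange.milliseconds() - days * 24 * 60 * 60 * 1000
    while since < exchange.milliseconds():
        batch = exchange.fetchLedger(code, since, 100)
        if not batch:
            break
        since = batch[-1]['timestamp'] + 1
        entries += batch
    return entries
Passing a currency code such as 'BTC' restricts the ledger to one currency; with None you should get everything the exchange exposes through this endpoint.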
I am working with this data, which I extracted from a public playlist and which shows all of the number 1s since 1953 with their audio features: https://raw.githubusercontent.com/StanWaldron/StanWaldron.github.io/main/FinalData.csv
I am now trying to loop through it and find the album IDs so that I can retrieve each release date and plot the audio features against other time series data, using this code:
def find_album_release(name):
    album_ids = []
    for x in name:
        results = sp.search(q="album:" + x, type="album")
        if not results["albums"]["items"]:
            return []
        album_id = results['albums']['items'][0]['uri']
        album_ids.append(album_id)
        print(album_id)
    return album_ids

final = pd.read_csv('FinalData.csv')
albumlist = final['album']
finalalbums = find_album_release(albumlist)
It works for the first 7 and then returns nothing. Without the if statement, it raises an index-out-of-range error. I have tested the 8th element by hard-coding in its album name and it returns the correct result; the same goes for the next 4 in the list, so it isn't an issue with the searching of those album names. I have played around with the lists, but I am not entirely sure what is out of range of what.
Any help is greatly appreciated
The 8th row's album name contains single quotes (Don't Stop Me Eatin'). I tried removing the quotes and it worked. Maybe you should check which characters are allowed in the query parameters.
def find_album_release(name):
    album_ids = []
    for x in name:
        x = x.replace("'", "")  # Remove the quotes from the album name
        results = sp.search(q="album:" + x, type="album")
        ...
        ...

final = pd.read_csv('FinalData.csv')
albumlist = final['album']
finalalbums = find_album_release(albumlist)
The output for me:
spotify:album:31lHUoHC3P6BRFzKYLyRJO
spotify:album:6s84u2TUpR3wdUv4NgKA2j
spotify:album:4OyzQQJHEfKXRfyN4QyLR7
spotify:album:2Hjcfw8zHN4dJDZJGOzLd6
spotify:album:1zEBi4O4AaY5M55dUcUp3z
spotify:album:0Hi8bTOS35xZM0zZ6S89hT
spotify:album:5GGIgiGtxIgcVJQnsKQW94
spotify:album:3rLjiJI34bHFNIFqeK3y9s
spotify:album:6q1MiYTIE28nFzjkvLLt0I
spotify:album:61ulfFSmmxMhc2wCdmdMkN
spotify:album:3euz4vS7ezKGnNSwgyvKcd
spotify:album:1pFaEu56zqpzSviJc3htZN
spotify:album:4PTxbJPI4jj0Kx8hwr1v0T
spotify:album:2ogiazbrNEx0kQHGl5ZBTQ
spotify:album:5glfCPECXSHzidU6exW8wO
spotify:album:1XMw3pBrYeXzNXZXc84DNw
spotify:album:623PL2MBg50Br5dLXC9E9e
spotify:album:4TqgXMSSTwP3RCo3MMSR6t
spotify:album:3xIwVbGJuAcovYIhzbLO3J
spotify:album:3h2xv1tJgDnJZGy5Roxs5A
spotify:album:66xP0vUo8to8ALVpkyKc41
spotify:album:6XcYTEonLIpg9NpAbJnqrC
spotify:album:5sXSHscDjBez8VF20cSyad
spotify:album:6pQZPa398NswBXGYyqHH7y
spotify:album:0488X5veBK6t3vSmIiTDJY
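One more thing worth flagging in the original function: the early return [] aborts the whole run as soon as a single album has no search hits, which is exactly why everything after the 8th entry disappeared. Here is a sketch of a variant (mine, reusing the same spotipy client sp) that skips unmatched albums instead of bailing out:
def find_album_release(names):
    album_ids = []
    for x in names:
        x = x.replace("'", "")  # strip the quotes, per the fix above
        results = sp.search(q="album:" + x, type="album")
        items = results["albums"]["items"]
        if not items:
            print("No match for: " + x)
            continue  # skip this album instead of aborting everything
        album_ids.append(items[0]["uri"])
    return album_ids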
I'm trying to search PubMed using search terms derived from a CSV file. I've combined the search terms into a form understandable by Biopython's Entrez module, like so:
term1 = ['"' + name + " AND " + disease + '"' for name, disease in zip(names, diseases)]
where 'names' and 'diseases' refer to the parameters I'm combining into a search using eSearch.
Subsequently, to execute the search, this is the code I wrote:
from Bio import Entrez

Entrez.email = "theofficialvelocifaptor#gmail.com"
for entry in range(0, len(term1)):
    handle = Entrez.esearch(db="pubmed", term=term1[entry], retmax="10")
    record = Entrez.read(handle)
    record["IdList"]
    print("The first 10 are\n{}".format(record["IdList"]))
Now, what I'm expecting from the code is, to iterate the function over the entire list stored in term1. However, this is the output I'm getting:
['Botanical name', 'Asystasia salicifalia', 'Asystasia salicifalia', 'Asystasia salicifalia', 'Barleria strigosa', 'Justicia procumbens', 'Justicia procumbens', 'Strobilanthes auriculata', 'Thunbergia laurifolia', 'Thunbergia similis']
['Disease', 'Puerperal illness', 'Puerperium', 'Puerperal disorder', 'Tonic', 'Lumbago', 'Itching', 'Malnutrition', 'Detoxificant', 'Tonic']
The first 10 are
['31849133', '31751652', '31359527', '31178344', '31057654', '30725751', '28476677', '27798405', '27174082', '26923540']
The first 10 are
[]
The first 10 are
[]
The first 10 are
[]
The first 10 are
[]
The first 10 are
[]
The first 10 are
[]
The first 10 are
[]
The first 10 are
[]
The first 10 are
[]
Surely there's something I'm missing, because the iteration seems to be cutting out prematurely. I've been at it for a solid 5 hours at the time of writing, and I feel very silly. I should also mention that I am new to Python, so if I'm making any obvious mistakes, I don't see them.
Your loop is working fine; there are simply no PubMed results for the last nine name/disease combinations.
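One thing the printed lists above do show: the first entries of names and diseases are the CSV headers ('Botanical name' and 'Disease'), so the first query is built from the header strings rather than a real name/disease pair. To confirm which term produces which result, a small sketch that skips the header pair and prints each term next to its hit count (the email address below is a placeholder):
from Bio import Entrez

Entrez.email = "you@example.com"  # placeholder address

for term in term1[1:]:  # skip the header pair ('Botanical name', 'Disease')
    handle = Entrez.esearch(db="pubmed", term=term, retmax="10")
    record = Entrez.read(handle)
    handle.close()
    print("{} -> {} hits: {}".format(term, record["Count"], record["IdList"]))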
I'd like to find a tool that does a good job of fuzzy-matching URLs that are the same except for extra parameters. For instance, for my use case, these two URLs are the same:
atest = ('http://www.npr.org/templates/story/story.php?storyId=4231170', 'http://www.npr.org/templates/story/story.php?storyId=4231170&sc=fb&cc=fp')
At first blush, fuzz.partial_ratio and fuzz.token_set_ratio from fuzzywuzzy get the job done with a threshold of 100:
from fuzzywuzzy import fuzz

ratio = fuzz.ratio(atest[0], atest[1])
partialratio = fuzz.partial_ratio(atest[0], atest[1])
sortratio = fuzz.token_sort_ratio(atest[0], atest[1])
setratio = fuzz.token_set_ratio(atest[0], atest[1])
print('ratio: %s' % (ratio))
print('partialratio: %s' % (partialratio))
print('sortratio: %s' % (sortratio))
print('setratio: %s' % (setratio))
>>>ratio: 83
>>>partialratio: 100
>>>sortratio: 83
>>>setratio: 100
But this approach fails and returns 100 in other cases, like:
atest = ('yahoo.com', 'http://finance.yahoo.com/news/earnings-preview-monsanto-report-2q-174000816.html')
The URLs in my data and the parameters added vary a great deal. I'm interested to know if anyone has a better approach using URL parsing or similar.
If all you want is to check that all query parameters in the first URL are present in the second URL, you can do it in a simpler way by just taking a set difference:
import urllib.parse as urlparse
base_url = 'http://www.npr.org/templates/story/story.php?storyId=4231170'
check_url = 'http://www.npr.org/templates/story/story.php?storyId=4231170&sc=fb&cc=fp'
base_url_parameters = set(urlparse.parse_qs(urlparse.urlparse(base_url).query).keys())
check_url_parameters = set(urlparse.parse_qs(urlparse.urlparse(check_url).query).keys())
print(base_url_parameters - check_url_parameters)
This will return an empty set, but if you change the base url to something like
base_url = 'http://www.npr.org/templates/story/story.php?storyId=4231170&test=1'
it will return {'test'}, which means that there are extra parameters in the base URL that are missing from the second URL.
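To also rule out false positives like the yahoo.com example above, one option is to require the scheme, host, and path to match before comparing query parameters. A sketch along those lines (the same_page helper is my own name for it):
import urllib.parse as urlparse

def same_page(url1, url2):
    # Same page if scheme, host, and path all match and the query
    # parameters of one URL are a subset of the other's.
    p1, p2 = urlparse.urlparse(url1), urlparse.urlparse(url2)
    if (p1.scheme, p1.netloc, p1.path) != (p2.scheme, p2.netloc, p2.path):
        return False
    q1 = set(urlparse.parse_qsl(p1.query))
    q2 = set(urlparse.parse_qsl(p2.query))
    return q1 <= q2 or q2 <= q1

print(same_page('http://www.npr.org/templates/story/story.php?storyId=4231170',
                'http://www.npr.org/templates/story/story.php?storyId=4231170&sc=fb&cc=fp'))  # True
print(same_page('yahoo.com',
                'http://finance.yahoo.com/news/earnings-preview-monsanto-report-2q-174000816.html'))  # False
Comparing parse_qsl tuples rather than just the keys also catches the case where the same parameter carries a different value in each URL.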
schools = ['GSGS', 'GSGL', 'JKG', 'JMG', 'MCGD', 'MANGD', 'SLSA', 'WHGR', 'WOG', 'GCG', 'LP',
           'PGG', 'WVSG', 'ASGE', 'CZG', 'EAG', 'GI']

for i in range(1, 17):
    gmaps = googlemaps.Client(key='')
    distances = gmaps.distance_matrix((GSGS), (schools), mode="driving")['rows'][0]['elements'][0]['distance']['text']
    print(distances)
The elements of the list are schools. I didn't want to make the list too long, so I used these abbreviations.
I want to get all the distances between "GSGS" and the schools in the list. I don't know what to write inside the second bracket.
distances = gmaps.distance_matrix((GSGS), (schools))
If I run it like that, it outputs this error:
Traceback (most recent call last):
  File "C:/Users/helpmecoding/PycharmProjects/untitled/distance.py", line 31, in <module>
    distances = gmaps.distance_matrix((GSGS), (schools), mode="driving")['rows'][0]['elements'][0]['distance']['text']
KeyError: 'distance'
I could do it one by one, but that's not what I want. If I write another school from the list and delete the for loop, it works fine.
I know I have to write a loop so that it cycles through the list, but I don't know how to do it. Behind every variable, for example "GSGS", is the school's address/location.
I deleted the key just for safety.
My dad helped me and we solved the problem. Now I have what I want :) Next I have to build a list with all the distances between the schools, and once I have that I have to use Dijkstra's algorithm to find the shortest route between them. Thanks for helping!
import googlemaps

GSGS = (address)
GSGL = (address)
# ...

schools = (GSGS, GSGL, JKG, JMG, MCGD, MANGD, SLSA, WHGR, WOG, GCG, LP, PGG, WVSG, ASGE, CZG, EAG, GI)
school_names = ("GSGS", "GSGL", "JKG", "JMG", "MCGD", "MANGD", "SLSA", "WHGR", "WOG", "GCG", "LP", "PGG", "WVSG", "ASGE", "CZG", "EAG", "GI")
school_distances = ()

for g in range(0, len(schools)):
    n = 0
    for i in schools:
        gmaps = googlemaps.Client(key='TOPSECRET')
        distances = gmaps.distance_matrix(schools[g], i)['rows'][0]['elements'][0]['distance']['text']
        if school_names[g] != school_names[n]:
            print(school_names[g] + " - " + school_names[n] + " " + distances)
        else:
            print(school_names[g] + " - " + school_names[n] + " " + "0 km")
        n = n + 1
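For what it's worth, the googlemaps client also accepts lists of origins and destinations, so the whole matrix can come back in a handful of requests rather than one call per pair (subject to the API's per-request element limits, so a 17x17 matrix may need chunking). A sketch, assuming the same schools and school_names tuples and a valid key:
import googlemaps

gmaps = googlemaps.Client(key='YOUR_KEY')  # placeholder key

# One request for the whole matrix instead of one call per pair.
# The API caps the number of elements per request, so a large
# matrix may need to be split into chunks of rows.
matrix = gmaps.distance_matrix(origins=schools, destinations=schools, mode="driving")

for g, row in enumerate(matrix['rows']):
    for n, element in enumerate(row['elements']):
        if school_names[g] == school_names[n]:
            print(school_names[g] + " - " + school_names[n] + " 0 km")
        elif element['status'] == 'OK':
            print(school_names[g] + " - " + school_names[n] + " " + element['distance']['text'])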
In my experience, it is sometimes difficult to know what is going on when you use a third-party API. Though I am not a proponent of reinventing the wheel, sometimes it is necessary to get a full picture of what is going on. So, I recommend building the API endpoint request yourself and seeing if that works:
import requests

schools = ['GSGS', 'GSGL', 'JKG', 'JMG', 'MCGD', 'MANGD', 'SLSA', 'WHGR', 'WOG', 'GCG', 'LP', 'PGG', 'WVSG', 'ASGE', 'CZG', 'EAG', 'GI']

def gmap_dist(apikey, origins, destinations, **kwargs):
    units = kwargs.get("units", "imperial")
    mode = kwargs.get("mode", "driving")
    baseurl = "https://maps.googleapis.com/maps/api/distancematrix/json?"
    # Note: for the raw API, multiple destinations must be joined into one
    # pipe-separated string, e.g. "|".join(destinations) for a list
    urlargs = {"key": apikey, "units": units, "origins": origins, "destinations": destinations, "mode": mode}
    req = requests.get(baseurl, params=urlargs)
    data = req.json()
    print(data)
    # do this for each key and index pair until you
    # find the one causing the problem, if it
    # is not immediately evident from the whole data print
    print(data["rows"])
    print(data["rows"][0])
    # Check if there are elements
    try:
        distances = data['rows'][0]['elements'][0]['distance']
    except KeyError:
        raise KeyError("No elements found")
    except IndexError:
        raise IndexError("API Request Error. No response returned")
    else:
        return distances
Also, as a general rule of thumb, it is good to have a test case to make sure things are working as they should before running the whole list:
# test case
try:
    test = gmap_dist(apikey="", units="imperial", origins="GSGS", destinations="GSGL", mode="driving")
except Exception as err:
    raise Exception(err)
else:
    dists = gmap_dist(apikey="", units="imperial", origins="GSGS", destinations=schools, mode="driving")
    print(dists)
Lastly, if you are testing the distance from "GSGS" to the other schools, you might want to take it out of your list of schools, as the distance will be 0.
Now, I suspect the reason you are getting this exception is that there are no JSON elements returned, probably because one of your parameters was improperly formatted.
If this function still raises a KeyError, check the address spelling and make sure your API key is valid; although if it were the API key, I would expect they would not bother to give you even empty results.
Hope this helps. Comment if it doesn't work.