I am using the Uniswap Python API to get live token prices. I have tried all the variations of the built-in functions, but none of them gives me the right value.
Here is my code:
from uniswap import Uniswap

address = "0x0000000000000000000000000000000000000000"
private_key = None
uniswap_wrapper = Uniswap(address, private_key, infura_url, version=2)
dai = "0x89d24A6b4CcB1B6fAA2625fE562bDD9a23260359"
print(uniswap_wrapper.get_eth_token_input_price(dai, 5*10**18))
print(uniswap_wrapper.get_token_eth_input_price(dai, 5*10**18))
print(uniswap_wrapper.get_eth_token_output_price(dai, 5*10**18))
print(uniswap_wrapper.get_token_eth_output_price(dai, 5*10**18))
And these are my results respectively,
609629848330146249678
24997277527023953
25306950626771242
2676124437498249933489
I don't want to use the CoinGecko or CoinMarketCap APIs, as they do not list newly released token prices immediately.
I tried Etherscan to get token prices, but it does not have a built-in function for that. Does anybody have any suggestions on how to fix this, or do you know of any alternatives?
I don't have the time or setup to test this right now, but I believe what you want is something like this (note the parentheses around the divisor; without them Python divides by 5 and then multiplies by 10**18):
print(uniswap_wrapper.get_eth_token_input_price(dai, 5*10**18) / (5*10**18))
print(uniswap_wrapper.get_token_eth_input_price(dai, 5*10**18) / (5*10**18))
print(uniswap_wrapper.get_eth_token_output_price(dai, 5*10**18) / (5*10**18))
print(uniswap_wrapper.get_token_eth_output_price(dai, 5*10**18) / (5*10**18))
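The reason the raw numbers look wrong is that both ETH and DAI use 18 decimals, so every amount is denominated in the smallest unit (wei). A minimal sketch of the normalization, using the first call above:

QUANTITY = 5 * 10**18  # 5 ETH, expressed in wei (18 decimals)

# Raw quote: how much DAI (in its smallest unit) you get for 5 ETH
raw = uniswap_wrapper.get_eth_token_input_price(dai, QUANTITY)

# Divide by the full quantity; with the numbers in the question this
# gives roughly 121.9 DAI per ETH
price_per_eth = raw / QUANTITY
print(price_per_eth)

This only works directly because ETH and DAI both have 18 decimals; for a token with a different number of decimals you would scale by that token's own 10**decimals.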
I am learning Python 3 and have a fairly simple task to complete, but I am struggling with how to glue it all together. I need to query an API and return the full list of applications, which I can do; I store the result and need to use it again to gather more data for each application from a different API call.
applistfull = requests.get(url, authmethod)
if applistfull.ok:
    data = applistfull.json()
    for app in data["_embedded"]["applications"]:
        print(app["profile"]["name"], app["guid"])
        summaryguid = app["guid"]
else:
    print(applistfull.status_code)
Next, I have summaryguid, and I need to query a different API for a value that may appear many times per application; in this case, the compiler used to build the code.
I can hard-code a GUID in the URL and get the correct information back, but I haven't yet figured out how to run the snippet below for every application from the first call and build a master list:
summary = requests.get(f"url{summaryguid}moreurl", authmethod)
if summary.ok:
    fulldata = summary.json()
    for appsummary in fulldata["static-analysis"]["modules"]["module"]:
        print(appsummary["compiler"])
I would prefer that no one just type out the right answer; please drop a few hints instead and let me continue to work through it logically, so I learn how to handle what I assume is a common issue. My current thought is that I need to move my second if up into my initial block and continue the logic there, but I am stuck.
You are on the right track! Here is the hint: the second API request can be nested inside the loop that iterates through the list of applications from the first API call. That way, the second API call is made once for each application.
import requests

applistfull = requests.get("url", authmethod)
if applistfull.ok:
    data = applistfull.json()
    for app in data["_embedded"]["applications"]:
        print(app["profile"]["name"], app["guid"])
        summaryguid = app["guid"]
        summary = requests.get(f"url/{summaryguid}/moreurl", authmethod)
        fulldata = summary.json()
        for appsummary in fulldata["static-analysis"]["modules"]["module"]:
            print(app["profile"]["name"], appsummary["compiler"])
else:
    print(applistfull.status_code)
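As a follow-up for the "master list" part of the question: once the nested loop works, you can accumulate the results in a data structure instead of printing them. A minimal sketch, assuming the same field names as above (master_list is a name introduced here for illustration):

import requests

master_list = {}  # maps application name -> list of compilers

applistfull = requests.get("url", authmethod)
if applistfull.ok:
    for app in applistfull.json()["_embedded"]["applications"]:
        summaryguid = app["guid"]
        summary = requests.get(f"url/{summaryguid}/moreurl", authmethod)
        if summary.ok:
            modules = summary.json()["static-analysis"]["modules"]["module"]
            master_list[app["profile"]["name"]] = [m["compiler"] for m in modules]

print(master_list)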
I currently use the function get_users_following(user_id_account, max_results=1000) to get the list of accounts a user follows on Twitter. So far it works well, as long as the user follows fewer than 1000 people, because the API limits the response to a maximum of 1000 users. The problem is that when the user follows more than 1000 people, I can't get the remaining ones: the function always gives me the first 1000 and ignores the rest.
https://docs.tweepy.org/en/stable/client.html#tweepy.Client.get_users_followers
https://developer.twitter.com/en/docs/twitter-api/users/follows/api-reference/get-users-id-following
There is a pagination_token parameter, but I don't know how to use it. What I want is just the last X newly followed accounts, so I can add them to a database and get a notification for each new entry.
client = tweepy.Client(api_token)
response = client.get_users_following(id=12345, max_results=1000)
Is it possible to go directly to the last page?
Tweepy handles pagination with the Paginator class (see the documentation here).
For example, if you want to see all the pages, you could do something like that:
# Use the wait_on_rate_limit argument if you don't handle the exception yourself
client = tweepy.Client(api_token, wait_on_rate_limit=True)
# Instantiate a new Paginator with the Tweepy method and its arguments
paginator = tweepy.Paginator(client.get_users_following, 12345, max_results=1000)
for response_page in paginator:
    print(response_page)
Or you could also directly get the full list of the user's followings:
# Instantiate a new Paginator with the Tweepy method and its arguments
paginator = tweepy.Paginator(client.get_users_following, 12345, max_results=1000)
for user in paginator.flatten():  # Without a limit argument, it gets all users
    print(user.id)
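To get only the new follows, one approach is to walk the flattened list and stop as soon as you reach an ID you have already stored. This is a sketch under two assumptions: that the API returns the most recently followed accounts first, and that known_ids (a hypothetical set loaded from your database) holds the IDs you have already seen:

# known_ids is a hypothetical set of user IDs already stored in your database
known_ids = {111, 222, 333}

paginator = tweepy.Paginator(client.get_users_following, 12345, max_results=1000)

new_follows = []
for user in paginator.flatten():
    if user.id in known_ids:  # assumes most-recent-first ordering:
        break                 # everything after this point is already known
    new_follows.append(user)

# new_follows now holds the follows that are not yet in the database

Because Paginator is lazy, breaking out early also avoids fetching pages you don't need.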
Is there a way to check whether a user is blocked, like an is_blocked method? The way I have it set up right now is to get a list of my blocked users and compare the author of each Tweet against that list, but this is highly inefficient, as it constantly runs into the rate limit.
Relevant code:
blocked_screen_names = [b.screen_name for b in tweepy.Cursor(api.blocks).items()]

for count, tweet in enumerate(tweepy.Cursor(api.search, q=query, lang='en',
                                            result_type='recent',
                                            tweet_mode='extended').items(500)):
    if tweet.user.screen_name in blocked_screen_names:
        continue
No, you'll have to do that the way you're currently doing it.
(Also, just a side note: you're better off checking your blocked users by their account ID rather than their screen name, because the ID never changes, whereas a screen name can.)
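If you do switch to IDs, a minimal sketch of that change, using the same Cursor-based setup as the question; using a set also makes each membership check O(1):

# Collect blocked account IDs instead of screen names; IDs never change
blocked_ids = {b.id for b in tweepy.Cursor(api.blocks).items()}

for tweet in tweepy.Cursor(api.search, q=query, lang='en',
                           result_type='recent',
                           tweet_mode='extended').items(500):
    if tweet.user.id in blocked_ids:
        continue  # skip tweets from blocked accounts
    # ... process the tweet ...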
For future reference, just check the Twitter API documentation, where you can get the answer to something like this straight away :) and save yourself the wait for someone to answer it here!
You'll notice that neither the V1 nor the V2 documentation contains an attribute like the one you describe:
V1 User Object:
https://developer.twitter.com/en/docs/twitter-api/v1/data-dictionary/object-model/tweet
V2 User Object:
https://developer.twitter.com/en/docs/twitter-api/data-dictionary/object-model/user
The free tier of AlchemyAPI allows 1000 calls a day (http://www.alchemyapi.com/products/pricing/).
I have been accessing the API with python as such:
import json
from alchemyapi import AlchemyAPI

alchemyapi = AlchemyAPI()

demo_text = 'Yesterday dumb Bob destroyed my fancy iPhone in beautiful Denver, Colorado. I guess I will have to head over to the Apple Store and buy a new one.'
response = alchemyapi.keywords('text', demo_text)
json_output = json.dumps(response, indent=4)
print(json_output)
I know I ran out of calls, since the requests were returning None.
How do I check how many calls I have left through the python interface?
Will the check count as one request?
You can use the alchemy_calls_left(api_key) function from here, and no, it won't count as a call itself.
This URL will return the daily call usage info; replace API_KEY with your own key:
http://access.alchemyapi.com/calls/info/GetAPIKeyInfo?apikey=API_KEY&outputMode=json
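A minimal sketch of hitting that endpoint from Python with requests (it just prints the whole response, since the exact field names are best confirmed by inspecting it):

import requests

API_KEY = "your-api-key"  # replace with your key
resp = requests.get(
    "http://access.alchemyapi.com/calls/info/GetAPIKeyInfo",
    params={"apikey": API_KEY, "outputMode": "json"},
)
print(resp.json())  # shows the daily call usage for this key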
You could also keep a local variable that tracks the number of API calls and resets when the date changes, using datetime.date from the datetime module.
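A minimal sketch of that idea (the class and its names are introduced here for illustration):

import datetime

class CallCounter:
    """Counts API calls locally and resets when the date changes."""
    def __init__(self, daily_limit=1000):
        self.daily_limit = daily_limit
        self.count = 0
        self.day = datetime.date.today()

    def record(self):
        today = datetime.date.today()
        if today != self.day:  # a new day has started: reset the counter
            self.day = today
            self.count = 0
        self.count += 1

    def remaining(self):
        return self.daily_limit - self.count

Call counter.record() right before each API request and check counter.remaining() to decide whether to keep going.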
You can also use this Java API as follows:
AlchemyAPI alchemyObj = AlchemyAPI.GetInstanceFromFile("/../AlchemyAPI/testdir/api_key.txt");
AlchemyAPI_NamedEntityParams params = new AlchemyAPI_NamedEntityParams();
params.setQuotations(true); // for instance, enable quotations
Document doc = alchemyObj.HTMLGetRankedNamedEntities(htmlString, "http://news-site.com", params);
The last call will throw an IOException if you exceed the allowed calls for a given day, with the message "Error making API call: daily-transaction-limit-exceeded."
You can then catch it, wait for 24 hours, and retry.
Using the GData Calendar API via App Engine in Python, when you create an event there are handy little helper methods to parse the response:
new_event = calendar_service.InsertEvent(event, '/calendar/feeds/default/private/full')
helper = new_event.GetEditLink().href
When you create a new calendar:
new_calendar = gd_client.InsertCalendar(new_calendar=calendar)
I was wondering if there might be related methods that I just can't find in the documentation (or that are perhaps undocumented)?
I need to store the new calendar's ID in the datastore, so I would like something along the lines of:
new_calendar = gd_client.InsertCalendar(new_calendar=calendar)
new_calendar.getGroupLink().href
In my code, the calendar is being created, and Google returns the Atom response with a 201, but before I resort to ElementTree or atom.parse to extract the desired element, I was hoping someone here might be able to help.
Many thanks in advance :)
I've never used the GData API, so I could be wrong, but...
It looks like GetLink() will return the link object for any specified rel. GetEditLink() seems to just call GetLink(), passing in the rel of the edit link. So you should be able to call GetLink() on the response from InsertCalendar() and pass in the rel of the group link.
Here's the pydoc info that I used to figure this out: http://gdata-python-client.googlecode.com/svn/trunk/pydocs/gdata.calendar_resource.data.html
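For what it's worth, a minimal sketch of that idea; the rel string below is a placeholder, since the right value is best confirmed by inspecting the <link> elements in the 201 Atom response:

new_calendar = gd_client.InsertCalendar(new_calendar=calendar)

# 'REL_OF_GROUP_LINK' is a placeholder; look at the link elements in the
# Atom response to find the actual rel used for the group link
group_link = new_calendar.GetLink('REL_OF_GROUP_LINK')
if group_link is not None:
    calendar_id = group_link.href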