I am trying to get a stock price response; however, I am getting different results than I expected. Here is my attempt:
import requests

url = "https://yahoo-finance-low-latency.p.rapidapi.com/v6/finance/quote"
querystring = {"symbols": "AAPL"}
headers = {
    'x-rapidapi-key': "xxx",
    'x-rapidapi-host': "yahoo-finance-low-latency.p.rapidapi.com"
}
response = requests.request("GET", url, headers=headers, params=querystring)
print(response.text)
Here is the documentation. Basically, the script is the same as the template, yet it is not showing minute-by-minute price data. I am expecting a response similar to this one ...
I think you want to use the chart API:
url = "https://yahoo-finance-low-latency.p.rapidapi.com/v8/finance/chart/AAPL"
querystring = {"interval":"1m"}
headers = {
'x-rapidapi-key': "XXX",
'x-rapidapi-host': "yahoo-finance-low-latency.p.rapidapi.com"
}
api_result = requests.request("GET", url, headers=headers, params=querystring)
api_response = api_result.json()
print(api_response)
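If you need the minute bars themselves, they are nested inside the chart payload. Here is a minimal sketch of pulling them out, assuming the usual v8 chart response shape (chart.result[0] with a timestamp array and indicators.quote[0] price arrays); verify the keys against the response you actually receive:

# Assumes the standard v8 chart layout; check the keys in your own response.
result = api_response['chart']['result'][0]
timestamps = result['timestamp']                    # Unix epoch, one per bar
closes = result['indicators']['quote'][0]['close']  # close price per bar
for ts, close in zip(timestamps, closes):
    print(ts, close)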
I already looked on Stack Overflow and could not find an answer to my problem.
I'm accessing an API from the German government that has an output limit of 10,000 entries. I want all the data for a specific city, and since there are more than 10,000 entries in the original database, I need to filter the query inside the requests.post call itself.
Here is one entry of the JSON result when I simply send a requests.post to this API:
{
  "results": [
    {
      "_id": "CXPTYYFY807",
      "CREATED_AT": "2019-12-17T14:48:17.130Z",
      "UPDATED_AT": "2019-12-17T14:48:17.130Z",
      "result": {
        "id": "CXPTYYFY807",
        "title": "Bundesstadt Bonn, SGB-315114, Ortsteilzentrum Brüser Berg, Fliesenarbeiten",
        "description": ["SGB-315114", "Ortsteilzentrum Brüser Berg, Fliesenarbeiten"],
        "procedure_type": "Ex ante Veröffentlichung (§ 19 Abs. 5)",
        "order_type": "VOB",
        "publication_date": "",
        "cpv_codes": ["45431000-7", "45431100-8"],
        "buyer": {
          "name": "Bundesstadt Bonn, Referat Vergabedienste",
          "address": "Berliner Platz 2",
          "town": "Bonn",
          "postal_code": "53111"
        },
        "seller": {
          "name": "",
          "town": "",
          "country": ""
        },
        "geo": {
          "lon": 7.0944,
          "lat": 50.73657
        },
        "value": "",
        "CREATED_AT": "2019-12-17T14:48:17.130Z",
        "UPDATED_AT": "2019-12-17T14:48:17.130Z"
      }
    }
  ],
  "aggregations": {},
  "pagination": {
    "total": 47389,
    "start": 0,
    "end": 0
  }
}
What I want is all the data where "town" is "Bonn".
What I have already tried:
import requests

url = 'https://daten.vergabe.nrw.de/rest/evergabe/aggregation_search'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json'}
data = {"results": [{"result": {"buyer": {"town": "Bonn"}}}]}

# Need to set the size limit, otherwise the API returns fewer entries:
params = {'size': 10000}

req = requests.post(url, params=params, headers=headers, json=data)
This returns a response, but it is not filtered by city.
I also tried req = requests.post(url, params=params, headers=headers, data=data), which returns an HTTP 400 error.
Another way would be to grab all the data in a loop using the pagination parameters at the end of the JSON, but again I am not able to write down the JSON path to the pagination, for example start: 0, end: 500.
Can anyone help me solve this?
Try:
import requests

url = 'https://daten.vergabe.nrw.de/rest/evergabe/aggregation_search'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json'}

query1 = {
    "query": {
        "match": {
            "buyer.town": "Bonn"
        }
    }
}

req = requests.post(url, headers=headers, json=query1)

# Check the output
req.text
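To confirm the filter took effect, you can compare the reported hit count against the unfiltered total of 47389; assuming the response keeps the same shape as the sample above:

# The pagination block carries the total hit count, so a working filter
# should report far fewer than the unfiltered 47389.
print(req.json()['pagination']['total'])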
Edit:
This won't work if the filter matches more than 10,000 results, but it may be a quick workaround for the problem you are facing.
import json
import requests

url = "https://daten.vergabe.nrw.de/rest/vmp_rheinland"
size = 5000

payload = '{"sort":[{"_id":"asc"}],"query":{"match_all":{}},"size":' + str(size) + '}'
headers = {
    'accept': "application/json",
    'content-type': "application/json",
    'cache-control': "no-cache"
}

response = requests.request("POST", url, data=payload, headers=headers)

tenders_array = []
query_data = json.loads(response.text)
tenders_array.extend(query_data['results'])

total_hits = query_data['pagination']['total']
result_size = len(query_data['results'])
last_id = query_data['results'][-1]["_id"]

number_of_loops = (total_hits - size) // size
last_loop_size = (total_hits - size) % size

for i in range(number_of_loops + 1):
    if i == number_of_loops:
        size = last_loop_size
    payload = ('{"sort":[{"_id":"asc"}],"query":{"match_all":{}},"size":'
               + str(size) + ',"search_after":["' + last_id + '"]}')
    response = requests.request("POST", url, data=payload, headers=headers)
    query_data = json.loads(response.text)
    result_size = len(query_data['results'])
    if result_size > 0:
        tenders_array.extend(query_data['results'])
        last_id = query_data['results'][-1]["_id"]
    else:
        break
The full code is in this gist: https://gist.github.com/thiagoalencar/34401e204358499ea3b9aa043a18395f
This is some code to paginate through the Elasticsearch API. The service is a layer over the Elasticsearch API, and the docs were not very clear. I tried scroll with no success. This solution uses the search_after parameter without a point-in-time, because that endpoint is not available. Sometimes the server refuses the request, so it is necessary to check for response.status_code == 502.
The code is messy and needs refactoring, but it works. The final tenders_array contains all the objects.
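If you only need the Bonn entries rather than the full dump, the same search_after pattern should combine with the match filter from the first answer. A sketch under that assumption (the endpoint accepting a standard Elasticsearch query body, and "buyer.town" being the right field):

import requests

# Hypothetical combination of the town filter with search_after pagination;
# the field name "buyer.town" comes from the answer above and may need checking.
url = 'https://daten.vergabe.nrw.de/rest/evergabe/aggregation_search'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json'}

results = []
search_after = None
while True:
    body = {
        "sort": [{"_id": "asc"}],
        "query": {"match": {"buyer.town": "Bonn"}},
        "size": 5000,
    }
    if search_after is not None:
        body["search_after"] = [search_after]
    resp = requests.post(url, headers=headers, json=body)
    if resp.status_code == 502:  # the server occasionally refuses requests
        continue                 # retry the same page
    page = resp.json().get('results', [])
    if not page:
        break
    results.extend(page)
    search_after = page[-1]["_id"]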
I'm trying to automate a process in which I have to download some Brazilian fund quotes from ANBIMA (a Brazilian regulator). I have managed to work through the first steps and retrieve the access token, but I don't know how to use the token to make requests. Here is the tutorial website: https://developers.anbima.com.br/en/como-acessar-nossas-apis/.
I have tried a lot of things, but all I get from the request is 'Could not find a required APP in the request, identified by HEADER client_id.'
If someone could shed some light, I would appreciate it. Thank you in advance.
import requests
import base64
import json

requests.get("https://api.anbima.com.br/feed/fundos/v1/fundos")

ClientID = '2Xy1ey11****'
ClientSecret = 'faStF1Hc****'

codeString = ClientID + ":" + ClientSecret
codeStringBytes = codeString.encode('ascii')
base64CodeBytes = base64.b64encode(codeStringBytes)
base64CodeString = base64CodeBytes.decode('ascii')

url = "https://api.anbima.com.br/oauth/access-token"
headers = {
    'content-type': 'application/json',
    'authorization': f'Basic {base64CodeString}'
}
body = {
    "grant_type": "client_credentials"
}

r = requests.post(url=url, data=json.dumps(body), headers=headers, allow_redirects=True)
jsonDict = r.json()

##################

urlFundos = "https://api-sandbox.anbima.com.br/feed/precos-indices/v1/titulos-publicos/mercado-secundario-TPF"
token = jsonDict['access_token']
headers2 = {
    'content-type': 'application/json',
    'authorization': f'Bearer {token}'
}
r2 = requests.get(url=urlFundos, headers=headers2)
r2.status_code
r2.text
I was having the same problem, but today I managed to make progress. I believe you need to adjust some parameters in the header.
Here is the piece of code I developed.
from bs4 import BeautifulSoup
import requests

PRODUCTION_URL = 'https://api.anbima.com.br'
SANDBOX_URL = 'https://api-sandbox.anbima.com.br'
API_URL = '/feed/fundos/v1/fundos/'
CODIGO_FUNDO = '594733'
PRODUCTION = False

if PRODUCTION:
    URL = PRODUCTION_URL
else:
    URL = SANDBOX_URL

URL = URL + API_URL + CODIGO_FUNDO

HEADER = {'access_token': 'your token',
          'client_id': 'your client ID'}

html = requests.get(URL, headers=HEADER).content
soup = BeautifulSoup(html, 'html.parser')
print(soup.prettify())
The sandbox API returns dummy JSON. To access the production API you will need to request access (I'm trying to do that right now).
import base64
import requests

url = 'https://api.anbima.com.br/oauth/access-token'
http = 'https://api-sandbox.anbima.com.br/feed/precos-indices/v1/titulos-publicos/pu-intradiario'

client_id = "oLRa*******"
client_secret = "6U2nefG*****"

client_credentials = "oLRa*******:6U2nefG*****"
client_credentials = client_credentials.encode('ascii')
senhabytes = base64.b64encode(client_credentials)
senha = senhabytes.decode('ascii')  # the base64 string to paste into the header
print(senhabytes, senha)

body = {
    "grant_type": "client_credentials"
}
headers = {
    'content-type': 'application/json',
    'Authorization': 'Basic b0xSYTJFSUlOMWR*********************'
}

request = requests.post(url, headers=headers, json=body, allow_redirects=True)
informacoes = request.json()
token = informacoes['access_token']

headers2 = {
    "content-type": "application/json",
    "client_id": f"{client_id}",
    "access_token": f"{token}"
}

titulos = requests.get(http, headers=headers2)
titulos = titulos.json()
I used your code as a model and made some changes. I printed the encoded client_id:client_secret and then copied and pasted it into the headers.
I also changed the data parameter to json.
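As a side note, requests can build the Basic header for you via its auth parameter, which avoids the manual base64 step entirely; a small sketch reusing the same (masked) credentials:

# requests adds "Authorization: Basic <base64(user:password)>" itself
# when given auth=(user, password).
import requests

url = 'https://api.anbima.com.br/oauth/access-token'
body = {"grant_type": "client_credentials"}

resp = requests.post(url, json=body, auth=(client_id, client_secret))
token = resp.json()['access_token']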
I am looking to loop through about 5 stock tickers using an API. Currently I have "MSFT" as the only stock being called; however, I would like to make a stock list to return multiple responses.
For example:
stock_list = ["MSFT", "AAPL", "LMD", "TSLA", "FLGT"]
How can I pass all 5 of these stocks into the querystring and print each response? Here is what I have currently, which prints only "MSFT" in JSON format...
import requests

# Use RapidAPI request to call info on stocks
url = "https://alpha-vantage.p.rapidapi.com/query"
querystring = {"function": "GLOBAL_QUOTE", "symbol": "MSFT"}
headers = {
    'x-rapidapi-key': "KEY INSERTED HERE",
    'x-rapidapi-host': "alpha-vantage.p.rapidapi.com"
}
response = requests.request("GET", url, headers=headers, params=querystring)
Try using a for loop.
import requests

url = 'https://alpha-vantage.p.rapidapi.com/query'
headers = {
    'x-rapidapi-key': '<API KEY>',
    'x-rapidapi-host': 'alpha-vantage.p.rapidapi.com',
}

tickers = ['MSFT', 'AAPL', 'LMD', 'TSLA', 'FLGT']

for ticker in tickers:
    querystring = {'function': 'GLOBAL_QUOTE', 'symbol': ticker}
    r = requests.get(url, headers=headers, params=querystring)
    print(r.json())
You can also pretty-print the JSON output using the json module.
import json

# ... your code ...

for ticker in tickers:
    # ... your code ...
    print(json.dumps(r.json(), indent=2))
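One caveat when looping: Alpha Vantage's free tier is rate limited, so rapid consecutive calls may come back throttled. A sketch that spaces the requests out, assuming roughly 5 calls per minute are allowed:

import time

for ticker in tickers:
    querystring = {'function': 'GLOBAL_QUOTE', 'symbol': ticker}
    r = requests.get(url, headers=headers, params=querystring)
    print(json.dumps(r.json(), indent=2))
    time.sleep(15)  # stay under the assumed free-tier limit of ~5 calls/min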
Also, you should revoke that API key before it is abused by anyone! Keys like this have to be kept somewhere safe.
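A common way to keep the key out of the source file is to read it from an environment variable; a minimal sketch (the variable name RAPIDAPI_KEY is just an example):

import os

# Set the variable beforehand, e.g. `export RAPIDAPI_KEY=...` in your shell.
api_key = os.environ['RAPIDAPI_KEY']
headers = {
    'x-rapidapi-key': api_key,
    'x-rapidapi-host': 'alpha-vantage.p.rapidapi.com',
}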
I have an array of ice cream flavors I want to iterate over for an API GET request. How do I loop through an array such as [vanilla, chocolate, strawberry] using the standard API request below?
import requests

url = "https://fakeurl.com/values/icecreamflavor/chocolate?"
payload = {}
headers = {
    'Authorization': 'Bearer (STRING)',
    '(STRING)': '(STRING)'
}
response = requests.request("GET", url, headers=headers, data=payload)
my_list = response.text.encode('utf8')
You could try string formatting on your URL: loop through your array of ice cream flavors, change the URL on each iteration, and perform the API GET request on the changed URL.
import requests

iceCreamFlavors = ["vanilla", "chocolate", "strawberry"]
url = "https://fakeurl.com/values/icecreamflavor/{flavor}?"
payload = {}
headers = {
    'Authorization': 'Bearer (STRING)',
    '(STRING)': '(STRING)'
}

my_list = []
for flavor in iceCreamFlavors:
    response = requests.request("GET", url.format(flavor=flavor), headers=headers, data=payload)
    my_list.append(response.text.encode('utf8'))
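If the endpoint returns JSON, you may prefer to collect parsed objects and skip failed calls; a small variation on the loop above, under that assumption:

my_list = []
for flavor in iceCreamFlavors:
    response = requests.get(url.format(flavor=flavor), headers=headers)
    if response.ok:  # True for 2xx status codes
        my_list.append(response.json())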
Can anyone please tell me how to write a request to get the list of events (only the modified ones) using a nextSyncToken?
This is my code that's not working:
import requests

def get_headers():
    headers = {
        #'Content-Type': "application/json",
        'Authorization': access_token_json  # access token obtained earlier
    }
    return headers

def get_nexttokensync_list_event():
    url_get_list_event = "https://www.googleapis.com/calendar/v3/calendars/id#gmail.com/events"
    querystring = {"nextSyncToken": "CMCEh************jd4CGAU="}
    response = requests.request("GET", url_get_list_event, headers=get_headers(), params=querystring)
    json_event_list_formatted = response.text
    print(json_event_list_formatted)
Yes, I've done it!
Here is my code:
import requests

url = "https://www.googleapis.com/calendar/v3/calendars/here_calendar_id/events"
querystring = {"syncToken": "here_token"}
headers = {
    'Content-Type': "application/json",
}
response = requests.request("GET", url, headers=headers, params=querystring)
print(response.text)
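For context, the Calendar API only hands out a nextSyncToken on the last page of a full events listing; you store it and send it back as syncToken on later calls to receive just the changed events. A sketch of that initial full listing, assuming the request is already authorized (e.g. an Authorization header on top of the headers above):

# Page through a full listing and keep the nextSyncToken from the last
# page; it only appears once pagination is exhausted.
params = {}
while True:
    resp = requests.get(url, headers=headers, params=params).json()
    if 'nextPageToken' in resp:
        params['pageToken'] = resp['nextPageToken']
    else:
        sync_token = resp.get('nextSyncToken')  # save for the next incremental sync
        break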