Google Calendar - get modified events with nextSyncToken via REST API - Python

Can anyone please tell me how to write a cURL-style request to get the list of (only modified) events with nextSyncToken?
This is my code, which isn't working:
def get_headers():
    headers = {
        # 'Content-Type': "application/json",
        'Authorization': access_token_json
    }
    return headers

def get_nexttokensync_list_event():
    url_get_list_event = "https://www.googleapis.com/calendar/v3/calendars/id#gmail.com/events"
    querystring = {"nextSyncToken": "CMCEh************jd4CGAU="}
    response = requests.request("GET", url_get_list_event, headers=headers, params=querystring)
    json_event_list_formatted = response.text
    print(json_event_list_formatted)

Yes, I've done it!
Here is my code:
import requests

url = "https://www.googleapis.com/calendar/v3/calendars/here_calendar_id/events"
# note: the query parameter is "syncToken", not "nextSyncToken"
querystring = {"syncToken": "here_token"}
headers = {
    'Content-Type': "application/json",
}
response = requests.request("GET", url, headers=headers, params=querystring)
print(response.text)
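For incremental sync beyond a single call, here's a minimal sketch of how the token round-trip could look: the API returns a nextSyncToken on the last page of results, and you send it back as syncToken on the next request. The helper name and pagination handling are my additions, not part of the original answer.

import requests

CALENDAR_ID = "here_calendar_id"  # placeholder, as in the answer above
BASE_URL = f"https://www.googleapis.com/calendar/v3/calendars/{CALENDAR_ID}/events"

def fetch_changed_events(headers, sync_token=None):
    """Fetch events, following pagination; return (events, next_sync_token)."""
    params = {}
    if sync_token:
        params["syncToken"] = sync_token  # only changes since the last sync
    events = []
    while True:
        data = requests.get(BASE_URL, headers=headers, params=params).json()
        events.extend(data.get("items", []))
        if "nextPageToken" in data:
            params["pageToken"] = data["nextPageToken"]  # more pages to fetch
        else:
            # the last page carries the token for the next incremental sync
            return events, data.get("nextSyncToken")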

Python Requests Post within a nested Json - retrieve data with a specific value

I already looked on Stack Overflow and could not find an answer to my problem.
I'm accessing an API from the German government that has an output limit of 10,000 entries. I want all the data for a specific city, and since there are more than 10,000 entries in the original database, I need to do the filtering as part of the requests.post query.
Here is one entry of the JSON result when I simply do requests.post to this API:
{
  "results": [
    {
      "_id": "CXPTYYFY807",
      "CREATED_AT": "2019-12-17T14:48:17.130Z",
      "UPDATED_AT": "2019-12-17T14:48:17.130Z",
      "result": {
        "id": "CXPTYYFY807",
        "title": "Bundesstadt Bonn, SGB-315114, Ortsteilzentrum Brüser Berg, Fliesenarbeiten",
        "description": ["SGB-315114", "Ortsteilzentrum Brüser Berg, Fliesenarbeiten"],
        "procedure_type": "Ex ante Veröffentlichung (§ 19 Abs. 5)",
        "order_type": "VOB",
        "publication_date": "",
        "cpv_codes": ["45431000-7", "45431100-8"],
        "buyer": {
          "name": "Bundesstadt Bonn, Referat Vergabedienste",
          "address": "Berliner Platz 2",
          "town": "Bonn",
          "postal_code": "53111"
        },
        "seller": {
          "name": "",
          "town": "",
          "country": ""
        },
        "geo": {
          "lon": 7.0944,
          "lat": 50.73657
        },
        "value": "",
        "CREATED_AT": "2019-12-17T14:48:17.130Z",
        "UPDATED_AT": "2019-12-17T14:48:17.130Z"
      }
    }
  ],
  "aggregations": {},
  "pagination": {
    "total": 47389,
    "start": 0,
    "end": 0
  }
}
What I want is all the data where "town": "Bonn".
What I have already tried:
import requests

url = 'https://daten.vergabe.nrw.de/rest/evergabe/aggregation_search'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json'}
data = {"results": [{"result": {"buyer": {"town": "Bonn"}}}]}
# need to set the size limit, otherwise it returns fewer results:
params = {'size': 10000}
req = requests.post(url, params=params, headers=headers, json=data)
This returns the data, but not filtered by city.
I also tried req = requests.post(url, params=params, headers=headers, data=data), which returns ERROR 400.
Another way would be to grab all the data in a loop using the pagination parameters at the end of the JSON, but again I'm not able to write down the JSON path to the pagination, for example: start: 0, end: 500.
Can anyone help me solve it?
Try:
url = 'https://daten.vergabe.nrw.de/rest/evergabe/aggregation_search'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json'}
query1 = {
    "query": {
        "match": {
            "buyer.town": "Bonn"
        }
    }
}
req = requests.post(url, headers=headers, json=query1)
# Check the output
req.text
Edit:
This won't work if the filter matches more than 10,000 results, but it may be a quick workaround for the problem you are facing.
import json

import requests

url = "https://daten.vergabe.nrw.de/rest/vmp_rheinland"
size = 5000
payload = '{"sort":[{"_id":"asc"}],"query":{"match_all":{}},"size":' + str(size) + '}'
headers = {
    'accept': "application/json",
    'content-type': "application/json",
    'cache-control': "no-cache"
}

response = requests.request("POST", url, data=payload, headers=headers)

tenders_array = []
query_data = json.loads(response.text)
tenders_array.extend(query_data['results'])

total_hits = query_data['pagination']['total']
result_size = len(query_data['results'])
last_id = query_data['results'][-1]["_id"]

number_of_loops = (total_hits - size) // size
last_loop_size = (total_hits - size) % size

for i in range(number_of_loops + 1):
    if i == number_of_loops:
        size = last_loop_size
    payload = '{"sort":[{"_id":"asc"}],"query":{"match_all":{}},"size":' + str(size) + ',"search_after":["' + last_id + '"]}'
    response = requests.request("POST", url, data=payload, headers=headers)
    query_data = json.loads(response.text)
    result_size = len(query_data['results'])
    if result_size > 0:
        tenders_array.extend(query_data['results'])
        last_id = query_data['results'][-1]["_id"]
    else:
        break
https://gist.github.com/thiagoalencar/34401e204358499ea3b9aa043a18395f
The full code is in the gist.
This is some code to paginate through the Elasticsearch API. It's an API on top of the Elasticsearch API, and the docs were not so clear. I tried scroll, with no success. This solution uses the search_after parameter without a point in time, because that endpoint is not available. Sometimes the server refuses the request, and it is necessary to check for response.status_code == 502.
The code is messy and needs refactoring, but it works. The final tenders_array contains all the objects.
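Since the server occasionally refuses requests with a 502, a small retry wrapper can help; here's a minimal sketch (the retry count and delay are arbitrary choices, not from the original answer):

import time

import requests

def post_with_retry(url, payload, headers, retries=5, delay=2):
    """POST the payload, retrying when the server answers 502 Bad Gateway."""
    for attempt in range(retries):
        response = requests.post(url, data=payload, headers=headers)
        if response.status_code != 502:
            return response
        time.sleep(delay)  # give the server a moment before retrying
    # return the last response so the caller can inspect what went wrong
    return response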

Trying to retrieve data from the Anbima API

I'm trying to automate a process in which I have to download some Brazilian fund quotes from Anbima (the Brazilian regulator). I have been able to work through the first steps to retrieve the access token, but I don't know how to use the token in order to make requests. Here is the tutorial website: https://developers.anbima.com.br/en/como-acessar-nossas-apis/.
I have tried a lot of things, but all I get from the request is 'Could not find a required APP in the request, identified by HEADER client_id.'
If someone could shed some light. Thank you in advance.
import requests
import base64
import json

requests.get("https://api.anbima.com.br/feed/fundos/v1/fundos")

ClientID = '2Xy1ey11****'
ClientSecret = 'faStF1Hc****'

codeString = ClientID + ":" + ClientSecret
codeStringBytes = codeString.encode('ascii')
base64CodeBytes = base64.b64encode(codeStringBytes)
base64CodeString = base64CodeBytes.decode('ascii')

url = "https://api.anbima.com.br/oauth/access-token"
headers = {
    'content-type': 'application/json',
    'authorization': f'Basic {base64CodeString}'
}
body = {
    "grant_type": "client_credentials"
}

r = requests.post(url=url, data=json.dumps(body), headers=headers, allow_redirects=True)
jsonDict = r.json()

##################

urlFundos = "https://api-sandbox.anbima.com.br/feed/precos-indices/v1/titulos-publicos/mercado-secundario-TPF"
token = jsonDict['access_token']
headers2 = {
    'content-type': 'application/json',
    'authorization': f'Bearer {token}'
}
r2 = requests.get(url=urlFundos, headers=headers2)
r2.status_code
r2.text
I was having the same problem, but today I made some progress. I believe you need to adjust some parameters in the header.
Here is the piece of code I developed:
from bs4 import BeautifulSoup
import requests

PRODUCTION_URL = 'https://api.anbima.com.br'
SANDBOX_URL = 'https://api-sandbox.anbima.com.br'
API_URL = '/feed/fundos/v1/fundos/'
CODIGO_FUNDO = '594733'
PRODUCTION = False

if PRODUCTION:
    URL = PRODUCTION_URL
else:
    URL = SANDBOX_URL

URL = URL + API_URL + CODIGO_FUNDO

HEADER = {'access_token': 'your token',
          'client_id': 'your client ID'}

html = requests.get(URL, headers=HEADER).content
soup = BeautifulSoup(html, 'html.parser')
print(soup.prettify())
The sandbox API will return a dummy JSON. To access the production API you will need to request access (I'm trying to do this just now).
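Side note: since the endpoint returns JSON, you could also parse the body directly instead of prettifying it with BeautifulSoup; a minimal sketch, reusing the URL and HEADER values built above:

import requests

# same final URL and HEADER as in the snippet above
URL = 'https://api-sandbox.anbima.com.br/feed/fundos/v1/fundos/594733'
HEADER = {'access_token': 'your token', 'client_id': 'your client ID'}

data = requests.get(URL, headers=HEADER).json()  # parse the JSON body directly
print(data)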
import base64

import requests

url = 'https://api.anbima.com.br/oauth/access-token'
http = 'https://api-sandbox.anbima.com.br/feed/precos-indices/v1/titulos-publicos/pu-intradiario'

client_id = "oLRa*******"
client_secret = "6U2nefG*****"

client_credentials = "oLRa*******:6U2nefG*****"
client_credentials = client_credentials.encode('ascii')

senhabytes = base64.b64encode(client_credentials)
senha = senhabytes.decode('ascii')  # the base64 string to paste into the header
print(senhabytes, senha)

body = {
    "grant_type": "client_credentials"
}
headers = {
    'content-type': 'application/json',
    'Authorization': 'Basic b0xSYTJFSUlOMWR*********************'
}

request = requests.post(url, headers=headers, json=body, allow_redirects=True)
informacoes = request.json()
token = informacoes['access_token']

headers2 = {
    "content-type": "application/json",
    "client_id": f"{client_id}",
    "access_token": f"{token}"
}

titulos = requests.get(http, headers=headers2)
titulos = titulos.json()
I used your code as a model and made some changes. I printed the encoded client_id:client_secret, then copied and pasted it into the headers.
I also changed data to json in the requests.post call.
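Rather than printing the encoded credentials and pasting them into the header by hand, you could build the Authorization value directly; a minimal sketch following the same steps as the answer (variable names are illustrative):

import base64

import requests

client_id = "your client id"
client_secret = "your client secret"

# base64-encode "client_id:client_secret" for HTTP Basic auth
credentials = f"{client_id}:{client_secret}".encode("ascii")
basic_value = base64.b64encode(credentials).decode("ascii")

headers = {
    "content-type": "application/json",
    "Authorization": f"Basic {basic_value}",
}
response = requests.post(
    "https://api.anbima.com.br/oauth/access-token",
    headers=headers,
    json={"grant_type": "client_credentials"},
)
token = response.json()["access_token"]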

Yahoo Finance API not showing real time quote price

I am trying to get a stock price response; however, I am getting different results. Here is my attempt:
import requests

url = "https://yahoo-finance-low-latency.p.rapidapi.com/v6/finance/quote"
querystring = {"symbols": "AAPL"}
headers = {
    'x-rapidapi-key': "xxx",
    'x-rapidapi-host': "yahoo-finance-low-latency.p.rapidapi.com"
}
response = requests.request("GET", url, headers=headers, params=querystring)
print(response.text)
Here is the Documentation. Basically, the script is the same as the template, yet it is not showing minute-by-minute price data. I am expecting a response similar to this one ...
I think you want to use the chart API:
url = "https://yahoo-finance-low-latency.p.rapidapi.com/v8/finance/chart/AAPL"
querystring = {"interval":"1m"}
headers = {
'x-rapidapi-key': "XXX",
'x-rapidapi-host': "yahoo-finance-low-latency.p.rapidapi.com"
}
api_result = requests.request("GET", url, headers=headers, params=querystring)
api_response = api_result.json()
print(api_response)
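If you then want the minute-by-minute prices out of that response, the v8 chart payload typically nests them under chart.result[0]; this structure is an assumption about the Yahoo chart format, so check it against an actual response:

# Extract per-minute timestamps and close prices from the chart response.
# Assumes the usual v8 chart shape: chart -> result[0] -> timestamp / indicators.
result = api_response["chart"]["result"][0]
timestamps = result["timestamp"]
closes = result["indicators"]["quote"][0]["close"]

for ts, close in zip(timestamps, closes):
    print(ts, close)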

How to split requests into many rows in Python?

Code one
import requests

url = "http://store.place.com.br/api/oms/pvt/orders"
headers = {
    'accept': "application/json",
    'content-type': "application/json",
    'x-vtex-api-apptoken': "{{VTEX-API-TOKEN}}",
    'x-vtex-api-appkey': "{{VTEX-API-KEY}}"
}
response = requests.request("GET", url, headers=headers)
print(response.text)
Code two
url = "http://store.place.com.br/api/oms/pvt/orders/oderId"
headers = {
'accept': "application/json",
'content-type': "application/json",
'x-vtex-api-apptoken': "{{VTEX-API-TOKEN}}",
'x-vtex-api-appkey': "{{VTEX-API-KEY}}"
}
response = requests.request("GET", url, headers=headers)
print(response.text)
My results are:
{"list":[{"orderId":"BWW-Lojas_Americanas-265033423001", ...
{"orderId":"BWW-Lojas_Americanas-265032819901","sequence":"506927","market
How can I split each list into many rows? After that, I want to save them in different .txt files.
You should read the response as JSON using requests.request(...).json(). Then, you can process each order in the list.
Here's an example:
response = requests.request("GET", url, headers=headers).json()

for order in response['list']:
    # Process each order
    # ...
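To go one step further and save each order to its own text file, as the question asks, here's a minimal sketch (the json module and the orderId-based filename are my additions):

import json

import requests

# url and headers as defined in the question's "Code one"
url = "http://store.place.com.br/api/oms/pvt/orders"
headers = {
    'accept': "application/json",
    'content-type': "application/json",
    'x-vtex-api-apptoken': "{{VTEX-API-TOKEN}}",
    'x-vtex-api-appkey': "{{VTEX-API-KEY}}"
}

response = requests.request("GET", url, headers=headers).json()

for order in response['list']:
    # one file per order, named after its orderId
    with open(f"{order['orderId']}.txt", "w") as f:
        json.dump(order, f, indent=2)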

How to update a pull request through Github API

I want to update the title of a pull request and am doing the below to achieve it (following this doc: https://developer.github.com/v3/pulls/#update-a-pull-request):
data = {"title": "New title"}
url='https://hostname/api/v3/repos/owner/repo/pulls/80'
token = 'my-token'
headers = {'Content-type': 'application/json', 'Accept': 'application/json', 'Authorization': 'token %s' % token}
resp = requests.patch(url, data=json.dumps(data), headers=headers)
print resp.json()
What am I missing? Please help.
The following worked for me:
import requests

token = "my-token"
url = "https://api.github.com/repos/:owner/:repo/pulls/:number"
payload = {
    "title": "New title"
}
r = requests.patch(url, auth=("username", token), json=payload)
print(r.json())
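As an alternative to basic auth, the token can also be sent directly in the Authorization header, which is what the question's snippet attempted; a minimal sketch (URL placeholders kept from the answer):

import requests

token = "my-token"
url = "https://api.github.com/repos/:owner/:repo/pulls/:number"
headers = {
    "Accept": "application/json",
    "Authorization": f"token {token}",  # GitHub's token auth scheme
}
r = requests.patch(url, headers=headers, json={"title": "New title"})
print(r.status_code, r.json())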
