How to split requests into many rows in Python? - python

Code one
import requests

url = "http://store.place.com.br/api/oms/pvt/orders"
headers = {
    'accept': "application/json",
    'content-type': "application/json",
    'x-vtex-api-apptoken': "{{VTEX-API-TOKEN}}",
    'x-vtex-api-appkey': "{{VTEX-API-KEY}}"
}
response = requests.request("GET", url, headers=headers)
print(response.text)
Code two
url = "http://store.place.com.br/api/oms/pvt/orders/orderId"
headers = {
    'accept': "application/json",
    'content-type': "application/json",
    'x-vtex-api-apptoken': "{{VTEX-API-TOKEN}}",
    'x-vtex-api-appkey': "{{VTEX-API-KEY}}"
}
response = requests.request("GET", url, headers=headers)
print(response.text)
My results are:
{"list":[{"orderId":"BWW-Lojas_Americanas-265033423001", ...
{"orderId":"BWW-Lojas_Americanas-265032819901","sequence":"506927","market
How can I split each list into separate rows? After that, I want to save them in different .txt files.

You should read the response as JSON using requests.request(...).json(). Then, you can process each order in the list.
Here's an example:
response = requests.request("GET", url, headers=headers).json()
for order in response['list']:
    # Process each order
    # ...
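For example, to write each order in the list to its own .txt file, here is a minimal sketch, assuming each entry carries an orderId field, as in the results above:
import json
import requests

url = "http://store.place.com.br/api/oms/pvt/orders"
headers = {
    'accept': "application/json",
    'content-type': "application/json",
    'x-vtex-api-apptoken': "{{VTEX-API-TOKEN}}",
    'x-vtex-api-appkey': "{{VTEX-API-KEY}}"
}
response = requests.get(url, headers=headers).json()

for order in response['list']:
    # use the orderId as the file name and dump the whole order as JSON text
    file_name = order['orderId'] + ".txt"
    with open(file_name, "w", encoding="utf-8") as f:
        f.write(json.dumps(order, indent=2))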

Related

Python Requests Post within a nested Json - retrieve data with a specific value

I have already looked on Stack Overflow and could not find an answer to my problem.
I'm accessing an API from the German government that has an output limit of 10,000 entries. I want all the data for a specific city, and since there are more than 10,000 entries in the original database, I need to "do the query" as part of the requests.post.
Here is one entry of the JSON result when I simply do a requests.post to this API:
{
  "results": [
    {
      "_id": "CXPTYYFY807",
      "CREATED_AT": "2019-12-17T14:48:17.130Z",
      "UPDATED_AT": "2019-12-17T14:48:17.130Z",
      "result": {
        "id": "CXPTYYFY807",
        "title": "Bundesstadt Bonn, SGB-315114, Ortsteilzentrum Brüser Berg, Fliesenarbeiten",
        "description": ["SGB-315114", "Ortsteilzentrum Brüser Berg, Fliesenarbeiten"],
        "procedure_type": "Ex ante Veröffentlichung (§ 19 Abs. 5)",
        "order_type": "VOB",
        "publication_date": "",
        "cpv_codes": ["45431000-7", "45431100-8"],
        "buyer": {
          "name": "Bundesstadt Bonn, Referat Vergabedienste",
          "address": "Berliner Platz 2",
          "town": "Bonn",
          "postal_code": "53111"
        },
        "seller": {
          "name": "",
          "town": "",
          "country": ""
        },
        "geo": {
          "lon": 7.0944,
          "lat": 50.73657
        },
        "value": "",
        "CREATED_AT": "2019-12-17T14:48:17.130Z",
        "UPDATED_AT": "2019-12-17T14:48:17.130Z"
      }
    }
  ],
  "aggregations": {},
  "pagination": {
    "total": 47389,
    "start": 0,
    "end": 0
  }
}
What I want is all the data where the buyer's "town" is "Bonn".
What I have already tried:
import requests

url = 'https://daten.vergabe.nrw.de/rest/evergabe/aggregation_search'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json'}
data = {"results": [{"result": {"buyer": {"town": "Bonn"}}}]}
# need to set the size limit, otherwise it returns fewer entries:
params = {'size': 10000}
req = requests.post(url, params=params, headers=headers, json=data)
This returns a response, but it is not filtered by city.
I also tried req = requests.post(url, params=params, headers=headers, data=data), which returns ERROR 400.
Another way would be to grab all the data in a loop using the pagination parameters at the end of the JSON, but again I'm not able to write down the JSON path to the pagination, for example start: 0, end: 500.
Can anyone help me solve it?
Try:
url = 'https://daten.vergabe.nrw.de/rest/evergabe/aggregation_search'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json'}
query1 = {
    "query": {
        "match": {
            "buyer.town": "Bonn"
        }
    }
}
req = requests.post(url, headers=headers, json=query1)
# Check the output
req.text
Edit:
This won't work if the filter matches more than 10,000 results, but it may be a quick workaround for the problem you are facing.
import json
import requests

url = "https://daten.vergabe.nrw.de/rest/vmp_rheinland"
size = 5000

payload = '{"sort":[{"_id":"asc"}],"query":{"match_all":{}},"size":'+str(size)+'}'
headers = {
    'accept': "application/json",
    'content-type': "application/json",
    'cache-control': "no-cache"
}
response = requests.request("POST", url, data=payload, headers=headers)

tenders_array = []
query_data = json.loads(response.text)
tenders_array.extend(query_data['results'])

total_hits = query_data['pagination']['total']
result_size = len(query_data['results'])
last_id = query_data['results'][-1]["_id"]

number_of_loops = (total_hits - size) // size
last_loop_size = (total_hits - size) % size

for i in range(number_of_loops + 1):
    if i == number_of_loops:
        size = last_loop_size
    payload = '{"sort":[{"_id":"asc"}],"query":{"match_all":{}},"size":'+str(size)+',"search_after":["'+last_id+'"]}'
    response = requests.request("POST", url, data=payload, headers=headers)
    query_data = json.loads(response.text)
    result_size = len(query_data['results'])
    if result_size > 0:
        tenders_array.extend(query_data['results'])
        last_id = query_data['results'][-1]["_id"]
    else:
        break
https://gist.github.com/thiagoalencar/34401e204358499ea3b9aa043a18395f
The code is in the gist.
This is some code to paginate through the Elasticsearch API. The service is an API on top of the Elasticsearch API, and the docs were not so clear. I tried scroll, with no success. This solution uses the search_after parameter without a point in time, because that endpoint is not available. Sometimes the server refuses the request, so it is necessary to check for response.status_code == 502.
The code is messy and needs refactoring, but it works. The final tenders_array contains all the objects.
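To fetch only the Bonn entries instead of everything, the same search_after pagination pattern can be combined with the buyer.town match query from the first snippet. This is a minimal sketch under that assumption (the function name fetch_bonn_tenders and the retry-on-502 loop are illustrative, not part of the original code):
import requests

def fetch_bonn_tenders(url="https://daten.vergabe.nrw.de/rest/evergabe/aggregation_search", size=5000):
    headers = {'Accept': 'application/json', 'Content-Type': 'application/json'}
    results = []
    last_id = None
    while True:
        body = {
            "sort": [{"_id": "asc"}],
            "query": {"match": {"buyer.town": "Bonn"}},
            "size": size
        }
        if last_id is not None:
            body["search_after"] = [last_id]
        response = requests.post(url, headers=headers, json=body)
        if response.status_code == 502:
            continue  # the server sometimes refuses a request; just retry it
        page = response.json().get('results', [])
        if not page:
            break
        results.extend(page)
        last_id = page[-1]["_id"]
    return results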

How to add pagination to request in Python?

I want to make a Python request and add pagination, but for some reason it does not work.
This is my request. Does someone know how to add 'paging': '{"page":0,"size":100}' correctly? The following isn't working:
url = self.get_url_for_endpoint(Constants.PATH_STATISTICS_CUSTOMERS)
payload = {}
params = {
    'paging': '{"page":0,"size":100}'
}
headers = {
    'Authorization': 'Bearer ' + self.access_token,
    'Cookie': 'JSESSIONID=AB52DV8260C*****************',
    'Content-Type': 'application/json',
}
r = requests.request("GET", url, headers=headers, data=payload, params=params)
This is working via Postman, though:
url + ?paging=%7B%22page%22:0,%22size%22:400%7D
So the endpoint has pagination!
I wouldn't recommend using an object as your paging value in the params; instead, break your page and size down into individual parameters, like ?page=0&size=100.
But if you want to use an object, your params should look like this:
params = {
    "paging": {"page": 0, "size": 100}
}
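Note that requests will not JSON-encode a nested dict passed in params, so if the endpoint really expects the whole paging object in a single query parameter (as the working Postman URL suggests), one option is to serialize it yourself with json.dumps. A minimal sketch of both variants, assuming a placeholder URL and token:
import json
import requests

url = "https://example.com/api/statistics/customers"  # placeholder for the real endpoint
access_token = "..."                                  # placeholder for the real token
headers = {
    'Authorization': 'Bearer ' + access_token,
    'Content-Type': 'application/json',
}

# Variant 1: page and size as individual query parameters
r1 = requests.get(url, headers=headers, params={'page': 0, 'size': 100})

# Variant 2: the whole paging object serialized into one parameter,
# which requests then URL-encodes like the Postman URL above
r2 = requests.get(url, headers=headers, params={'paging': json.dumps({"page": 0, "size": 100})})
print(r1.url, r2.url)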

Looping through post request

I'm trying to create a loop of POST requests that changes the environment of an application, but the script only runs for the 'lab' environment. I'm using a REST API to send these requests and generate a different config file for each environment.
context="application-team"
clientToken="#option.clientToken#"
#Vars
vaultUrl="https://127.0.0.1:8200"
def createKvPath (vaultUrl):
for environment in ['lab', 'stg', 'prod']:
url = vaultUrl + '/v1/kv/'+context+'/application-name/'+environment+''
payload = {'none':'none'}
headers = {
'accept': 'application/json',
'Content-Type': 'application/json',
'X-Vault-Token': clientToken,
}
resp = requests.post(url, headers=headers, json=payload)
dataKv = resp.json()
vault = createKvPath(vaultUrl)
I solved this by removing dataKv = resp.json(); I don't know why resp.json() was locking the process.
def createKvPath(vaultUrl):
    for environment in ['lab', 'stg', 'prod']:
        url = vaultUrl + '/v1/kv/' + context + '/application-name/' + environment
        payload = {'none': 'none'}
        headers = {
            'accept': 'application/json',
            'Content-Type': 'application/json',
            'X-Vault-Token': clientToken,
        }
        resp = requests.post(url, headers=headers, json=payload)

vault = createKvPath(vaultUrl)
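If the response body is still needed for each environment, one option is to check the status and the body before decoding; a minimal sketch, assuming the same Vault KV endpoint (the results dict and the raise_for_status/empty-body checks are additions, not part of the original script):
def createKvPath(vaultUrl):
    results = {}
    for environment in ['lab', 'stg', 'prod']:
        url = vaultUrl + '/v1/kv/' + context + '/application-name/' + environment
        headers = {
            'accept': 'application/json',
            'Content-Type': 'application/json',
            'X-Vault-Token': clientToken,
        }
        resp = requests.post(url, headers=headers, json={'none': 'none'})
        resp.raise_for_status()   # fail loudly instead of silently stopping the loop
        if resp.content:          # a KV write may return an empty body, which resp.json() cannot decode
            results[environment] = resp.json()
    return results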

Looping through an array of API values for API GET request in Python

I have an array of ice cream flavors I want to iterate over for an API GET request. How do I loop through an array such as [vanilla, chocolate, strawberry] using the standard API request below?
import requests

url = "https://fakeurl.com/values/icecreamflavor/chocolate?"
payload = {}
headers = {
    'Authorization': 'Bearer (STRING)',
    '(STRING)': '(STRING)'
}
response = requests.request("GET", url, headers=headers, data=payload)
my_list = response.text.encode('utf8')
You could try string formatting on your URL: loop through your array of ice-cream flavors, change the URL on each iteration, and perform the API GET request on the changed URL.
import requests

iceCreamFlavors = ["vanilla", "chocolate", "strawberry"]
url = "https://fakeurl.com/values/icecreamflavor/{flavor}?"
payload = {}
headers = {
    'Authorization': 'Bearer (STRING)',
    '(STRING)': '(STRING)'
}
my_list = []
for flavor in iceCreamFlavors:
    response = requests.request("GET", url.format(flavor=flavor), headers=headers, data=payload)
    my_list.append(response.text.encode('utf8'))
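If the endpoint returns JSON, it may be cleaner to collect the decoded objects per flavor instead of raw bytes; a small variation on the loop above, reusing the same url, headers and iceCreamFlavors (the results dict is an addition for illustration):
results = {}
for flavor in iceCreamFlavors:
    response = requests.get(url.format(flavor=flavor), headers=headers)
    # response.json() parses the body, so each flavor maps to a Python object
    results[flavor] = response.json()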

Google calendar- get modified events with nextSyncToken via API rest

Can anyone please tell me how to write a call that gets the list of events (only modified ones) using nextSyncToken?
This is my code, which isn't working:
def get_headers():
    headers = {
        #'Content-Type': "application/json",
        'Authorization': access_token_json
    }
    return headers

def get_nexttokensync_list_event():
    url_get_list_event = "https://www.googleapis.com/calendar/v3/calendars/id#gmail.com/events"
    querystring = {"nextSyncToken": "CMCEh************jd4CGAU="}
    response = requests.request("GET", url_get_list_event, headers=get_headers(), params=querystring)
    json_event_list_formatted = response.text
    print(json_event_list_formatted)
Yes, I've done it!
Here is my code:
import requests

url = "https://www.googleapis.com/calendar/v3/calendars/here_calendar_id/events"
querystring = {"syncToken": "here_token"}
headers = {
    'Content-Type': "application/json",
}
response = requests.request("GET", url, headers=headers, params=querystring)
print(response.text)
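To keep incremental syncs going, the nextSyncToken returned on the last page of results has to be stored and sent as the syncToken of the next request. A minimal sketch of that loop, assuming an OAuth access token and the standard events.list response fields (items, nextPageToken, nextSyncToken); the placeholders here_calendar_id, here_token and here_access_token are illustrative:
import requests

url = "https://www.googleapis.com/calendar/v3/calendars/here_calendar_id/events"
headers = {'Authorization': 'Bearer here_access_token'}

def fetch_changed_events(sync_token):
    events = []
    params = {"syncToken": sync_token}
    while True:
        data = requests.get(url, headers=headers, params=params).json()
        events.extend(data.get("items", []))
        if "nextPageToken" in data:
            # more pages for this sync round: request the next page
            params["pageToken"] = data["nextPageToken"]
        else:
            # last page: keep this token for the next incremental sync
            return events, data.get("nextSyncToken")

changed_events, next_sync_token = fetch_changed_events("here_token")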
