I'm making a request to a REST API to show the application version, but the output I got is not what I expected, and I want to format this data.
from requests.api import request
from requests.packages.urllib3.exceptions import InsecureRequestWarning
import re
import requests

requests.packages.urllib3.disable_warnings(InsecureRequestWarning)

bipTmb = 'https://api1-foo'
bipAws = 'https://api2-foo'

def requestGet(bipTmb, bipAws):
    cont = []
    for urls in [bipTmb, bipAws]:
        url = urls + '/mgmt/tm/sys?$top=4'
        headers = {
            'accept': '*/*',
            'Content-Type': 'application/json',
        }
        response = requests.get(url, headers=headers, verify=False, auth=('auth', 'pass'))
        data = response.json()
        data = data['items']
        reference = data[0]
        version = reference['reference']
        find = re.search("ver=.*", format(version))
        content = (urls, find.group())
        cont.append(content)
    return cont

cont = requestGet(bipTmb, bipAws)
for item in cont:
    treated_data = item
    print(treated_data)
Output:
[('https://api1-foo', "ver=13.1.3.6'}"), ('https://api2-foo', "ver=13.1.3.6'}")]
Formatted output expected:
https://api1-foo ver=13.1.3.6,
https://api2-foo ver=13.1.3.6
How can I transform this data?
Try this:
print("\n".join(treated_data))
I reached the expected output by looping through the data:
for i in range(len(cont)):
    print(cont[i])
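For reference, a small sketch of one way to produce exactly the formatted output shown above, assuming cont holds the (url, "ver=...'}") tuples from the printed output. The trailing '} comes from the greedy ver=.* pattern; it can be stripped here, or avoided upstream with a tighter regex such as r"ver=[\d.]+".
lines = ["{} {}".format(url, ver.rstrip("'}")) for url, ver in cont]
print(",\n".join(lines))  # url and version joined by a space, entries separated by a comma and newline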
I have a dataframe as below:
type     url
Cov      link1.ndjson
Cov      link2.ndjson
EOB      link1.ndjson
Patient  link1.ndjson
There are N rows with links for three types of files. As a newbie in Python, I currently use each file's type and its link one by one to download the files.
Right now I can download the files manually with this code:
import requests

url = 'https://fakesite.com/4472/link1.ndjson'  # the link is from the first row of the above dataframe
headers = {
    'Authorization': 'Bearer %s' % access_token,
    'Accept-Encoding': 'gzip',
}
response = requests.get(url, headers=headers)
print(response.status_code)

import sys
original_stdout = sys.stdout  # Save a reference to the original standard output

with open('Cov_link1.ndjson', 'w') as f:
    # the file name is the concatenation of the first row of the dataframe: type + '_' + url
    sys.stdout = f  # Change the standard output to the file we created.
    print(response.text)

sys.stdout = original_stdout  # Reset the standard output to its original value
The request is to download the files for all N rows in the dataframe. Can someone please help?
Kindly check if this serves your purpose:
import pandas as pd
import requests
import sys

df = pd.DataFrame({'type': ['Cov', 'Cov', 'EOB', 'Patient'],
                   'url': ['link1.ndjson', 'link2.ndjson', 'link1.ndjson', 'link1.ndjson']})
df['name'] = df['type'] + '_' + df['url']

url_list = list(df['url'])
name_list = list(df['name'])

headers = {
    'Authorization': 'Bearer %s' % access_token,
    'Accept-Encoding': 'gzip',
}

result = list(zip(url_list, name_list))
for i, j in result:
    url = 'https://fakesite.com/4472/' + i  # build the full download URL from the base URL and the link
    response = requests.get(url, headers=headers)
    print(response.status_code)
    original_stdout = sys.stdout
    with open(j, 'w') as f:
        sys.stdout = f
        print(response.text)
    sys.stdout = original_stdout
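A side note: redirecting sys.stdout just to save the response isn't necessary; you can write the body straight to the file. A minimal sketch of the same loop, reusing df and headers from above and assuming the same base URL as in the manual example:
for link, name in zip(df['url'], df['name']):
    response = requests.get('https://fakesite.com/4472/' + link, headers=headers)
    print(response.status_code)
    with open(name, 'w') as f:
        f.write(response.text)  # write the response body directly, no stdout redirection needed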
The code is behaving strangely and I am unable to understand what's going on.
The code that works fine:
link = "https://api.luminati.io/dca/trigger_immediate?collector=XXXxxxXXXxxxXXXX"
head = {"Authorization": "Bearer xxXXxxXXxx" ,"Content-Type": "application/json"}
data = '{"url":"https://www.practo.com/pune/doctor/XXXXXXxXXXXX"}'
res = requests.post(link, headers = head, data = data)
print("Status: "+str(res.status_code), "Message: "+res.text)
Output:
Status: 202 Message: {"response_id":"z7627t1617552745375r14623bt37oo"}
But I want to load the "url":"https://www.practo.com/pune/doctor/XXXXXXxXXXXX" part dynamically.
url = "https://www.practo.com/pune/doctor/XXXXXXxXXXXX"
link = "https://api.luminati.io/dca/trigger_immediate?collector=XXXxxxXXXxxxXXXX"
head = {"Authorization": "Bearer xxXXxxXXxx" ,"Content-Type": "application/json"}
data = {"url":url}
res = requests.post(link, headers = head, data = data)
print("Status: "+str(res.status_code), "Message: "+res.text)
Output:
Status: 500 Message: 'Unexpected token u in JSON at position 7'
The 500 happens because passing a dict to data= sends it form-encoded rather than as JSON, so the API cannot parse the body. To build the body dynamically, you can use %s string formatting, like this:
url = "https://www.practo.com/pune/doctor/XXXXXXxXXXXX"
data = '{"url":"%s"}' % url
or you can serialize the dictionary to a JSON string, like:
import json
data = {"url": url}
res = requests.post(link, headers=head, data=json.dumps(data))
By the way, you can pass the body not as data but as json; from the requests documentation:
:param json: (optional) json data to send in the body of the :class:`Request`.
So your request will look like:
data = {"url": url}
res = requests.post(link, headers=head, json=data)
any_dynamic_variable = 'XXXXXXxXXXXX'  # you can make your website, or part of the URL, dynamic
url = f'https://www.practo.com/pune/doctor/{any_dynamic_variable}'
headers = {"Authorization": "Bearer xxXXxxXXxx", "Content-Type": "application/json"}
payload = {
    'your_key': 'your_value',
    'currency': 'CA',  # <==== this is an example
}
r = requests.post(url, headers=headers, data=payload)
print(r.status_code)  # check the status code to find the error if you have any
print(r.text)
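Putting the pieces together, here is a minimal sketch of the dynamic request, with the same placeholder credentials as in the question and json= doing the serialization:
import requests

url = "https://www.practo.com/pune/doctor/XXXXXXxXXXXX"  # the part you want to set dynamically
link = "https://api.luminati.io/dca/trigger_immediate?collector=XXXxxxXXXxxxXXXX"
head = {"Authorization": "Bearer xxXXxxXXxx", "Content-Type": "application/json"}

res = requests.post(link, headers=head, json={"url": url})  # json= sends a proper JSON body
print("Status: " + str(res.status_code), "Message: " + res.text)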
This is part of my code. It successfully makes an API call and receives the data from the API endpoint. I am trying to save this JSON data into a CSV file, but I am not sure how. Also, the API data is printed as Unicode instead of a string; how do I fix that?
I have tried these lines of code:
Trial 1:
with open('data.json', 'w', encoding='utf-8') as file:
    json.dump(response, file, ensure_ascii=False, indent=4)
Trial 2:
data = response.text
file_csv = open("File.csv", "w")
writer = csv.writer(file_csv, delimiter=' ')
for rows in basketball_data.split('\n'):
    writer.writerow(rows)
Trial 3:
I tried using Pandas, but that didn't work either. Any recommendations?
The code below is for fetching the API data, which works:
basketball_data = " "
URL = "https://api.sportsdata.io/v3/nba/stats/json/PlayerGameStatsByDate/2020-FEB7"  # API endpoint
# Dictionary to map HTTP authenticator and API key
Headers = {'Ocp-Apim-Subscription-Key': 'd22e84f5c1fa4f4ab47bf1419bd94221',
           'accept': "application/json", 'accept': "text/csv"}
response = requests.get(url=URL, headers=Headers)  # get request parameters to
print(response.status_code)  # Status code tells us if the API call is successful
print(response.json())  # JSON object is returned
Try this:
import pandas as pd
import requests

url = "https://api.sportsdata.io/v3/nba/stats/json/PlayerGameStatsByDate/2020-FEB7"
headers = {
    "Ocp-Apim-Subscription-Key": "d22e84f5c1fa4f4ab47bf1419bd94221",
    "accept": "application/json",
}
data = requests.get(url=url, headers=headers).json()
pd.DataFrame(data).to_csv("basketball_data.csv", index=False)
Here is one of the solutions given by the pandas documentation:
import requests
import pandas as pd

basketball_data = " "
URL = "https://api.sportsdata.io/v3/nba/stats/json/PlayerGameStatsByDate/2020-FEB7"  # API endpoint
Headers = {'Ocp-Apim-Subscription-Key': 'd22e84f5c1fa4f4ab47bf1419bd94221',
           'accept': "application/json", 'accept': "text/csv"}
response = requests.get(url=URL, headers=Headers)  # get request parameters to
print(response.status_code)  # Status code tells us if the API call is successful
print(response.json())  # JSON object is returned

df = pd.json_normalize(response.json())
df.to_csv("data1.csv")
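If you would rather stay with the csv module from Trial 2, a sketch along these lines should also work, assuming the endpoint returns a JSON array of flat per-player objects:
import csv
import requests

URL = "https://api.sportsdata.io/v3/nba/stats/json/PlayerGameStatsByDate/2020-FEB7"
Headers = {'Ocp-Apim-Subscription-Key': 'd22e84f5c1fa4f4ab47bf1419bd94221',
           'accept': 'application/json'}

rows = requests.get(url=URL, headers=Headers).json()  # list of dicts, one per player

with open('File.csv', 'w', newline='') as file_csv:
    writer = csv.DictWriter(file_csv, fieldnames=rows[0].keys())  # column names from the first record
    writer.writeheader()
    writer.writerows(rows)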
I'm trying to send JSON as a parameter through a GET method for an API. I found that the URL it actually hits is a little different from the original URL: some ":%20" text is inserted into it. Not sure why this difference appears; can someone help?
Original URL: http://258.198.39.215:8280/areas/0.1/get/raj/name?jsonRequest=%7B%22rajNames%22%3A%5B%22WAR%22%5D%7D
My URL : http://258.198.39.215:8280/areas/0.1/get/raj/name?jsonRequest=&%7B%22rajNames%22:%20%22WAR%22%7D
Python code:
headers = {'Accept': 'application/json','Authorization': 'Bearer '+access_token}
json = {'rajNames':'WAR'}
url = 'http://258.198.39.215:8280/areas/0.1/get/raj/name?jsonRequest='
r = requests.get(url, params=json.dumps(json),headers=headers)
print r.url
The spaces are not the problem; your method of generating the query string is, as is your actual JSON payload.
Note that your original URL has a different JSON structure:
>>> from urllib import unquote
>>> unquote('%7B%22rajNames%22%3A%5B%22WAR%22%5D%7D')
'{"rajNames":["WAR"]}'
The rajNames parameter is a list, not a single string.
Next, requests treats everything in params as additional parameters, so it used & to delimit them from the parameter already present in the URL. Use a dictionary and leave the ?jsonRequest= part for requests to generate:
headers = {'Accept': 'application/json', 'Authorization': 'Bearer '+access_token}
json_data = {'rajNames': ['WAR']}
params = {'jsonRequest': json.dumps(json_data)}
url = 'http://258.198.39.215:8280/areas/0.1/get/raj/name'
r = requests.get(url, params=params, headers=headers)
print r.url
Demo:
>>> import requests
>>> import json
>>> headers = {'Accept': 'application/json', 'Authorization': 'Bearer <access_token>'}
>>> json_data = {'rajNames': ['WAR']}
>>> params = {'jsonRequest': json.dumps(json_data)}
>>> url = 'http://258.198.39.215:8280/areas/0.1/get/raj/name'
>>> requests.Request('GET', url, params=params, headers=headers).prepare().url
'http://258.198.39.215:8280/areas/0.1/get/raj/name?jsonRequest=%7B%22rajNames%22%3A+%5B%22WAR%22%5D%7D'
You can still eliminate the spaces used in the JSON output from json.dumps() by setting the separators argument to (',', ':'):
>>> json.dumps(json_data)
'{"rajNames": ["WAR"]}'
>>> json.dumps(json_data, separators=(',', ':'))
'{"rajNames":["WAR"]}'
but I doubt that is really needed.
I am requesting an Ajax web site with a Python script, fetching cities and branch offices from http://www.yurticikargo.com/bilgi-servisleri/Sayfalar/en-yakin-sube.aspx
I completed the first step by posting {cityID: 34} to this URL and fetching the JSON output:
http://www.yurticikargo.com/_layouts/ArikanliHolding.YurticiKargo.WebSite/ajaxproxy-sswservices.aspx/GetTownByCity
But I cannot retrieve the JSON output with Python, although I succeed with the Chrome Advanced Rest Client extension, posting {cityID:54,townID:5416,unitOnDutyFlag:null,closestFlag:2} to
http://www.yurticikargo.com/_layouts/ArikanliHolding.YurticiKargo.WebSite/ajaxproxy-unitservices.aspx/GetUnit
All of the source code is here:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import requests
import json


class Yurtici(object):
    baseUrl = 'http://www.yurticikargo.com/'
    ajaxRoot = '_layouts/ArikanliHolding.YurticiKargo.WebSite/ajaxproxy-sswservices.aspx/'
    getTown = 'GetTownByCity'
    getUnit = 'GetUnit'
    urlGetTown = baseUrl + ajaxRoot + getTown
    urlGetUnit = baseUrl + ajaxRoot + getUnit
    headers = {'content-type': 'application/json', 'encoding': 'utf-8'}

    def __init__(self):
        pass

    def ilceler(self, plaka=34):  # Default testing value
        payload = {'cityId': plaka}
        url = self.urlGetTown
        r = requests.post(url, data=json.dumps(payload), headers=self.headers)
        return r.json()  # OK

    def subeler(self, ilceNo=5902):  # Default testing value
        # 5902 Çerkezköy
        payload = {'cityID': 59, 'townID': 5902, 'unitOnDutyFlag': 'null', 'closestFlag': 0}
        url = self.urlGetUnit
        headers = {'content-type': 'application/json', 'encoding': 'utf-8'}
        r = requests.post(url, data=json.dumps(payload), headers=headers)
        print r.status_code, r.raw.read()


if __name__ == '__main__':
    a = Yurtici()
    print a.ilceler(37)  # OK
    print a.subeler()  # NOT OK !!!
Your code isn't posting to the same URL you're using in your test example.
Let's walk through this backwards. First, let's look at the failing POST.
url = self.urlGetUnit
headers = {'content-type': 'application/json','encoding':'utf-8'}
r = requests.post(url, data=json.dumps(payload), headers=headers)
So we're posting to a URL that is equal to self.urlGetUnit. Ok, let's look at how that's defined:
baseUrl = 'http://www.yurticikargo.com/'
ajaxRoot = '_layouts/ArikanliHolding.YurticiKargo.WebSite/ajaxproxy-sswservices.aspx/'
getUnit = 'GetUnit'
urlGetUnit = baseUrl + ajaxRoot + getUnit
If you expand urlGetUnit, the URL works out to http://www.yurticikargo.com/_layouts/ArikanliHolding.YurticiKargo.WebSite/ajaxproxy-sswservices.aspx/GetUnit. Let's put this alongside the URL you used in Chrome to compare the differences:
http://www.yurticikargo.com/_layouts/ArikanliHolding.YurticiKargo.WebSite/ajaxproxy-sswservices.aspx/GetUnit
http://www.yurticikargo.com/_layouts/ArikanliHolding.YurticiKargo.WebSite/ajaxproxy-unitservices.aspx/GetUnit
See the difference? ajaxRoot is not the same for both URLs. Sort that out and you'll get back a JSON response.
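If that is indeed the issue, the fix is probably just a second ajax root for the unit services, something like:
# sketch of the fix: point GetUnit at the unit-services proxy instead of the ssw-services one
ajaxRootUnit = '_layouts/ArikanliHolding.YurticiKargo.WebSite/ajaxproxy-unitservices.aspx/'
urlGetUnit = baseUrl + ajaxRootUnit + getUnit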