I'm using https://developers.looker.com/api/explorer/4.0/methods/Query/query
I'm not sure why some looks get a complete response from this endpoint while others get an incomplete one. Does anyone have experience with the Looker API? I want to understand what could cause this behaviour:
incomplete response for look_id_1 : {"id":"1"}
complete response for look_id_2 : {"id":"2","view":"x","fields":["a"]}
Note: https://developers.looker.com/api/explorer/4.0/methods/Look/search_looks retrieves complete information for both of the looks above, but I need the query response for the looks.
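One thing worth ruling out (an assumption, not a confirmed diagnosis): if a `fields` query parameter is supplied on the request, Looker returns only those fields, which looks exactly like an "incomplete" response. A minimal sketch, using a hypothetical instance URL and placeholder token, that builds the two request URLs without sending them:

```python
import requests

# Hypothetical instance URL; a real call needs a token from the /login endpoint.
BASE = "https://mycompany.looker.com/api/4.0"
headers = {"Authorization": "token <access_token>"}

def query_url(query_id, fields=None):
    # Build (but don't send) GET /queries/{query_id}, so the effect of a
    # `fields` parameter on the final URL is visible.
    params = {"fields": fields} if fields else {}
    req = requests.Request("GET", "%s/queries/%s" % (BASE, query_id),
                           params=params, headers=headers).prepare()
    return req.url

full_url = query_url(1)          # no fields filter: full query body returned
pruned_url = query_url(1, "id")  # server returns only the `id` field
```

Comparing the responses of the two URLs against each other (with real credentials) would show whether a stray `fields` filter explains the difference between the two looks.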
I am stuck trying to solve the following issue: I am trying to access a web page to retrieve data about a supplier (I need to do it for work) in an automated way, using an API.
The API is at https://wl-api.mf.gov.pl and provides information, stored as JSON, about a supplier who can be looked up via their tax ID (NIP).
I use the requests package and manage to get a positive response:
import requests
nip=7393033097
response=requests.get("https://wl-api.mf.gov.pl")
print(response)  # <Response [200]>
If I open the link and scroll until I find the part about tax information, I find the following line:
GET /api/search/nip/{nip}
So I appended this to the URL in my request, since that is how I understood it, and this is probably where I went wrong:
response=requests.get("https://wl-api.mf.gov.pl/search/7393033097/{7393033097}")
However, I cannot access it.
Am I doing something wrong? I believe so. Can anyone give me a little help? :)
Update: checking the requirements/documentation, I find the following information, which I need a bit of support to implement:
GET /api/search/nip/{nip}
(nip?date)
Single entity search by nip
**Path parameters**
nip (required)
*Path Parameter — Nip*
**Query parameters**
date (required)
*Query Parameter — format: date*
**Return type**
EntityResponse
Example data
Content-Type: application/json
I think this line: https://wl-api.mf.gov.pl/search/7393033097/{7393033097} should instead be (note the /api prefix and the nip path segment from the documentation):
https://wl-api.mf.gov.pl/api/search/nip/7393033097
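That matches the documented path, and the documentation also marks the `date` query parameter as required. A sketch of the full call, assuming today's date is acceptable for the lookup (the request is built but not sent):

```python
import datetime
import requests

# Documented shape: nip in the path, required `date` as a query parameter.
nip = "7393033097"
today = datetime.date.today().isoformat()  # format: date -> YYYY-MM-DD

req = requests.Request(
    "GET",
    "https://wl-api.mf.gov.pl/api/search/nip/" + nip,
    params={"date": today},
).prepare()

print(req.url)
# To actually send it: response = requests.get(req.url); data = response.json()
```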
The HTTP request mentioned in the Microsoft Graph API documentation at this link is:
GET /reports/getMailboxUsageDetail(period='{period_value}')
I cannot understand how to incorporate the data mentioned within the round parentheses:
(period='{period_value}')
I tried adding it to the query parameters:
URL="https://graph.microsoft.com/beta/reports/getMailboxUsageDetail"
queryParams={"period":"D7"}
requests.get(URL, params=queryParams)
But it didn't work.
It's actually simpler than you might think: you use the period parameter shown in the round brackets directly in the URL, exactly as shown in the documentation.
So, if you want to get the same report you're trying as shown in HTTP format:
GET /reports/getMailboxUsageDetail(period='{period_value}')
You will use the URL as:
reportsURI="https://graph.microsoft.com/beta/reports/getMailboxUsageDetail(period='D7')"
requests.get(reportsURI, headers=authHeaders)
This will give you a report in CSV format.
If you want it in JSON format, you can use a query parameter to specify the format:
formatParams = {"format":"application/json"}
requests.get(reportsURI, headers=auth, params=formatParams)
This will give you the report as JSON.
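Putting the answer together as a runnable sketch (the bearer token is a placeholder you would obtain via your usual OAuth2 flow; the `format` parameter name follows the answer above):

```python
import requests

token = "<access_token>"  # placeholder: obtain via your OAuth2 flow
auth_headers = {"Authorization": "Bearer " + token}

# The period goes into the function-style URL segment, not the query string.
period = "D7"
reports_uri = ("https://graph.microsoft.com/beta/reports/"
               "getMailboxUsageDetail(period='%s')" % period)

# CSV by default; a format query parameter requests JSON instead.
# response = requests.get(reports_uri, headers=auth_headers,
#                         params={"format": "application/json"})
```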
I am trying to use the Microsoft Academic Graph API with Python to get information about an author's affiliations. However, the information provided in
https://learn.microsoft.com/en-us/azure/cognitive-services/academic-knowledge/graphsearchmethod
is not clear to me.
I have also read Microsoft Academic Graph Search - retrieving all papers from a journal within a time-frame?
I am trying with something like this:
import requests
url = "https://westus.api.cognitive.microsoft.com/academic/v1.0/graph/search"
querystring = {"mode": "json"}
payload = "{}"
response = requests.request("POST", url, data=payload, params=querystring)
print(response.text)
What should I put in "payload" to retrieve the affiliation of, for example, the Author "John Doe" ?
It seems you are using the wrong endpoint.
As with anything experimental, the documentation seems out of date.
I have had success calling https://api.labs.cognitive.microsoft.com/academic/v1.0/evaluate
These endpoints are listed in the Cognitive Labs documentation.
I have yet to figure out how to retrieve academic profiles, as the query below yields no results, whereas academic.microsoft.com has plenty.
https://api.labs.cognitive.microsoft.com/academic/v1.0/evaluate?expr=Composite(AA.AuN='Harry L. Anderson')&model=latest&count=10&attributes=Id,Ti,AA.AuN,E,AA.AuId
Hope this helps anyone stumbling upon this.
Update: here is a working query for the same author:
https://api.labs.cognitive.microsoft.com/academic/v1.0/evaluate?model=latest&count=100&expr=Composite(AA.AuN=='harry l anderson')&attributes=Id,Ti,AA.AuN,E,AA.AuId
Notice the author name has to be in lowercase.
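For reference, the same query sent via requests. The subscription key is a placeholder (the header name is the standard one for Azure Cognitive Services), and requests takes care of URL-encoding the `expr` value:

```python
import requests

headers = {"Ocp-Apim-Subscription-Key": "<your_key>"}  # placeholder key

params = {
    "expr": "Composite(AA.AuN=='harry l anderson')",  # author name in lowercase
    "model": "latest",
    "count": 10,
    "attributes": "Id,Ti,AA.AuN,E,AA.AuId",
}
req = requests.Request(
    "GET",
    "https://api.labs.cognitive.microsoft.com/academic/v1.0/evaluate",
    params=params, headers=headers,
).prepare()

print(req.url)  # the quotes and parens in expr are percent-encoded for you
# To send it: requests.Session().send(req).json()
```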
There's a tool for migrating the MAG into Apache Elasticsearch ;)
https://github.com/vwoloszyn/mag2elasticsearch
This is pretty much a generic question, and I would just like someone to point me in the right direction. I understand an API exists in Alchemy, as documented at http://www.alchemyapi.com/api/image-tagging/urls.html
I have managed to call the API but am unable to make any sense of the data. My code so far is below:
import requests
apikey = "01405e7492ca333c6ab3a5c7da544e9f11bf6e26"
picture = open('IMG_1172.JPG','rb').read()
url="http://gateway-a.watsonplatform.net/calls/image/ImageGetRankedImageKeywords?apikey=01405e7492ca333c6ab3a5c7da544e9f11bf6e26&outputMode=json&forceShowAll=1"
r = requests.post(url = url, data = picture)
print r.txt
The print above doesn't work as expected: I only get Response [200] back, although I was supposed to receive JSON from my API call, and I don't see it here.
I'd appreciate it if anyone could guide me on this. Thanks!
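One likely culprit (an assumption from the snippet alone): the requests Response attribute is `r.text`, not `r.txt`, and `r.json()` parses the body for you. A sketch of the corrected call, with the request built but not sent:

```python
import requests

url = ("http://gateway-a.watsonplatform.net/calls/image/"
       "ImageGetRankedImageKeywords")
params = {"apikey": "<your_api_key>", "outputMode": "json", "forceShowAll": 1}

# Build the request to show its final shape without sending it.
req = requests.Request("POST", url, params=params).prepare()
print(req.url)

# Actual call (Python 3 syntax; note .text, not .txt):
# with open("IMG_1172.JPG", "rb") as f:
#     r = requests.post(url, params=params, data=f.read())
# print(r.text)    # raw JSON string
# print(r.json())  # parsed dict
```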
I am coding a Python 2 script to perform some automatic actions on a website, using urllib/urllib2 to accomplish this task. It involves GET and POST requests, custom headers, etc.
I stumbled upon an issue which seems not to be mentioned in the documentation. Let's pretend we have the following valid URL: https://stackoverflow.com/index.php?abc=def&fgh=jkl and we need to perform a POST request there.
Here is how my code looks (please ignore any typos):
data = urllib.urlencode({ "data": "somedata", "moredata": "somemoredata" })
urllib2.urlopen(urllib2.Request("https://stackoverflow.com/index.php?abc=def&fgh=jkl", data))
No errors are shown, but according to the web server, the request is received at "https://stackoverflow.com/index.php" and not at "https://stackoverflow.com/index.php?abc=def&fgh=jkl". What is the problem here?
I know that I could use Requests, but I'd like to use urllib/urllib2 first.
If I'm not wrong, you should pass your request data in the data dictionary you pass to urlopen():
data = urllib.urlencode({'abc': 'def', 'fgh': 'jkl'})
urllib2.urlopen(urllib2.Request('http://stackoverflow.com/index.php', data))
Also, just as you said, use Requests unless you absolutely need the low-level access urllib provides.
Hope this helps.
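For what it's worth, urllib itself does not drop the query string when a POST body is supplied. The Python 3 urllib.request equivalent of the snippet shows both parts coexisting in one request, without sending anything:

```python
from urllib.parse import urlencode
from urllib.request import Request

url = "https://stackoverflow.com/index.php?abc=def&fgh=jkl"
body = urlencode({"data": "somedata", "moredata": "somemoredata"}).encode()

req = Request(url, data=body)  # supplying data= makes this a POST
print(req.get_method())        # POST
print(req.full_url)            # the query string is still in the URL
```

If the server really sees the request land at /index.php without the query string, the cause is more likely a server-side redirect than urllib stripping it.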