I am trying to use the Microsoft Academic Graph API with Python to get information about an author's affiliations. However, the information provided in
https://learn.microsoft.com/en-us/azure/cognitive-services/academic-knowledge/graphsearchmethod
is not clear to me.
I have also read "Microsoft Academic Graph Search - retrieving all papers from a journal within a time-frame?".
I am trying with something like this:
import requests
url = "https://westus.api.cognitive.microsoft.com/academic/v1.0/graph/search"
querystring = {"mode": "json"}  # the original "json%0A" contained a stray URL-encoded newline
payload = "{}"
response = requests.post(url, data=payload, params=querystring)
print(response.text)
What should I put in "payload" to retrieve the affiliation of, for example, the author "John Doe"?
It seems you are using the wrong endpoint.
As with anything experimental, the documentation seems to be out of date.
I have had success calling https://api.labs.cognitive.microsoft.com/academic/v1.0/evaluate
These endpoints can be seen in the cognitive labs documentation.
I have yet to figure out how to retrieve academic profiles, as the query below yields no results, whereas academic.microsoft.com has loads.
https://api.labs.cognitive.microsoft.com/academic/v1.0/evaluate?expr=Composite(AA.AuN='Harry L. Anderson')&model=latest&count=10&attributes=Id,Ti,AA.AuN,E,AA.AuId
Hope this may help anyone stumbling upon this.
Update:
Here is a working query for the same author:
https://api.labs.cognitive.microsoft.com/academic/v1.0/evaluate?model=latest&count=100&expr=Composite(AA.AuN=='harry l anderson')&attributes=Id,Ti,AA.AuN,E,AA.AuId
Notice the author name has to be in lowercase.
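For anyone who prefers calling this endpoint from Python rather than pasting the URL into a browser, here is a minimal requests sketch of the same query. It assumes you have a valid subscription key (sent in the Ocp-Apim-Subscription-Key header, as with other Cognitive Services endpoints) and it adds AA.AfN, which as far as I can tell is the attribute carrying the affiliation name the original question asked about:

import requests

SUBSCRIPTION_KEY = "YOUR_SUBSCRIPTION_KEY"  # placeholder; use your own key

params = {
    "model": "latest",
    "count": 100,
    "expr": "Composite(AA.AuN=='harry l anderson')",
    "attributes": "Id,Ti,AA.AuN,AA.AuId,AA.AfN",
}
headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}

r = requests.get(
    "https://api.labs.cognitive.microsoft.com/academic/v1.0/evaluate",
    params=params,
    headers=headers,
)
for entity in r.json().get("entities", []):
    # AA is the author array; AfN (if present) is the affiliation name
    print(entity.get("Ti"), [a.get("AfN") for a in entity.get("AA", [])])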
There's a tool for migrating the MAG into Apache Elasticsearch ;)
https://github.com/vwoloszyn/mag2elasticsearch
I am kind of stuck trying to solve the following issue: I am trying to access a web page in order to get some data for a supplier (I need to do it for work) in an automated way, using an API.
The API is at https://wl-api.mf.gov.pl and provides information in JSON for a supplier, looked up via their tax ID (NIP).
I use the requests package and manage to get a positive response:
import requests

nip = 7393033097
response = requests.get("https://wl-api.mf.gov.pl")
print(response)  # -> <Response [200]>
If I click on the link and scroll until I find the specific part for the tax information, I find the following line
GET /api/search/nip/{nip}
So what I did was add this path to the URL in my request, since this is how I understood it, and this is the point where I think I am wrong:
response=requests.get("https://wl-api.mf.gov.pl/search/7393033097/{7393033097}")
However, I cannot access it.
Am I doing something wrong (I believe so), and can anyone give me a little help? :)
Update: If I check the requirements/documentation, I find the following information, where I need a bit of support to implement it:
GET /api/search/nip/{nip}
(nip?date)
Single entity search by nip
**Path parameters**
nip (required)
*Path Parameter — Nip*
**Query parameters**
date (required)
*Query Parameter — format: date*
**Return type**
EntityResponse
Example data
Content-Type: application/json
I think this line: https://wl-api.mf.gov.pl/search/7393033097/{7393033097} should instead follow the documented path /api/search/nip/{nip}, together with the required date query parameter, i.e. something like:
https://wl-api.mf.gov.pl/api/search/nip/7393033097?date=YYYY-MM-DD
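A minimal requests sketch of that call, assuming the path and the required date parameter from the documentation excerpt above (today's date is used purely as a placeholder value):

import requests
from datetime import date

nip = "7393033097"
url = f"https://wl-api.mf.gov.pl/api/search/nip/{nip}"

# The docs above mark "date" as a required query parameter (format: date)
response = requests.get(url, params={"date": date.today().isoformat()})
print(response.status_code)
print(response.json())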
I am trying to access historical weather data from an API. I obtained the API key from here: https://www.visualcrossing.com/weather/weather-data-services#/timeline
I am trying this, but I keep getting a 404 error. I am not sure whether this is because of a problem with the API or with my code.
import requests
r = requests.get("https://weather.visualcrossing.com/VisualCrossingWebServices/rest/services/timeline/London,UK/2021-01-01/2021-03-28?key=AXSSS")
print(r)
Documentation: https://www.visualcrossing.com/resources/documentation/weather-api/timeline-weather-api/
How can I obtain the data?
I tested the site you linked and created an account and API key to get London weather data; you can use it too.
Code :
import requests
r = requests.get("https://weather.visualcrossing.com/VisualCrossingWebServices/rest/services/timeline/London?unitGroup=metric&key=PPKBBJ7637X5SNDUG6HZA23X7")
print(r)
Output :
<Response [200]>
Now you can access the data with the json() method too:
print(r.json())
The output is huge, but your problem comes down to two things:
1. The API key is not correct (I tested it).
2. You would need a premium plan for a query of that size.
To get a range of dates, the URL will look like this:
https://weather.visualcrossing.com/VisualCrossingWebServices/rest/services/timeline/London/2021-1-1/2021-1-5?unitGroup=us&key=PPKBBJ7637X5SNDUG6HZA23X7
The date range you gave returns too many rows per request, so you would need a premium plan.
Otherwise you will get this error on their own website:
Your plan allows up to 100 rows per request. This query will return (yyy) rows. Please smaller date range or fewer locations.
Our paid plans offer increased query limits
If you don't want to pay for a paid plan, you can use this link to browse the public APIs list on GitHub; many of the APIs there are free and can be used without an API key:
GitHub public APIs
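For reference, a small sketch of the range query using a requests params dict and a shorter window that should stay under the 100-rows-per-request free-tier limit (the key below is a placeholder):

import requests

API_KEY = "YOUR_API_KEY"  # placeholder; substitute your Visual Crossing key
url = (
    "https://weather.visualcrossing.com/VisualCrossingWebServices"
    "/rest/services/timeline/London,UK/2021-01-01/2021-01-05"
)

r = requests.get(url, params={"unitGroup": "metric", "key": API_KEY})
r.raise_for_status()
data = r.json()

# Each element of "days" is one day of weather data
for day in data.get("days", []):
    print(day["datetime"], day.get("tempmax"), day.get("tempmin"))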
I'd like to know how I can simply get the title or other information about a video using the YouTube API, when the only thing I know is the URL of the video (so basically the video ID).
What other info can I get about a video? E.g. length, category, uploader name, country of origin, ...?
Can somebody provide me a usable code snippet and the library to use for this data collecting?
Thanks for the help in advance.
There are plenty of examples and documentation about what you can get using the YouTube API's Python bindings.
Here are Python code samples as provided by YouTube:
https://developers.google.com/youtube/v3/code_samples/python#create_and_manage_youtube_video_caption_tracks
And the code samples can also be downloaded from their GitHub repository:
https://github.com/youtube/api-samples/tree/master/python
Try the code below:
import json
import requests

# search_result comes from a prior search.list call; DEVELOPER_KEY is your API key
payload = {'id': search_result["id"]["videoId"], 'part': 'contentDetails,statistics,snippet', 'key': DEVELOPER_KEY}
l = requests.Session().get('https://www.googleapis.com/youtube/v3/videos', params=payload)
resp_dict = json.loads(l.content)
print("Title:", resp_dict['items'][0]['snippet']['title'])
Try using this API: https://pypi.org/project/python-youtube/
To grab the title, you can do something like this:
from pyyoutube import Api

api = Api(api_key='YOUR_API_KEY')  # initialize the client with your own API key
playlistVideoItems = api.get_playlist_items(playlist_id='PLOU2XLYxmsIKpaV8h0AGE05so0fAwwfTw').items
print(playlistVideoItems[0].snippet.title)
Note that the above is somewhat untested code. I copied the relevant bits and pieces from what I currently have, but I did not test this exact set of lines. And of course, I'm not actually using that playlist ID for my purposes.
In regards to what other types of information can be gathered, I would recommend reading the documentation or running in a debugger. As of this writing, I am trying to figure out how to obtain the video uploader name, but I don't really need this data although it would be nice to have. I will update this answer if I figure it out.
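Since the question is about a single video rather than a playlist, my understanding is that the same library also exposes a get_video_by_id helper; treat the following as an untested sketch along the same lines (the key and video ID are placeholders):

from pyyoutube import Api

api = Api(api_key='YOUR_API_KEY')                           # placeholder key
video_response = api.get_video_by_id(video_id='VIDEO_ID')   # placeholder video ID

video = video_response.items[0]
# The snippet carries the title; contentDetails and statistics carry length, view counts, etc.
print(video.snippet.title)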
I am very new to the Graph API and am trying to write a simple Python script that first identifies all pages that a user has liked and all groups that he/she is a part of. To do this, I used the following:
To get the groups he has joined:
API: /{user-id}/groups
Permissions req: user_groups
To get the pages he has liked:
API: /{user-id}/likes
Permissions req: user_likes
and
url='https://graph.facebook.com/'+userId+'/likes?access_token='+accessToken +'&limit='+str(limit)
Now that I can see the IDs of the groups in the JSON output, I want to hit them one by one and fetch all content (posts, comments, photos etc.) posted within each group. Is this possible and if yes, how can I do it? What API calls do I have to make?
That's quite a broad question; before asking here you should have tried searching on SO.
Anyway, I'll tell you broadly how you can do it.
First of all, go through the official Graph API documentation: Graph API Reference.
You'll find every API that can be used to fetch the data, for example /group and /page, and you'll learn what kind of access token and permissions each API call requires.
Here are some API calls useful to you:
to fetch the group/page's posts- /{group-id/page-id}/posts
to fetch the comments of a post- {post-id}/comments
to fetch the group/page's photos- /{group-id/page-id}/photos
and so on. Once you go through the documentation and test some API calls, things will become much clearer. It's quite easy!
Hope it helps. Good luck!
Here's an example using facepy:
from facepy import GraphAPI
import csv
import json

graph = GraphAPI(APP_TOKEN)  # APP_TOKEN: an access token you already have
groupIDs = ("[id here]", "[etc]")
outfile_name = "teacher-groups-summary-export-data.csv"
f = csv.writer(open(outfile_name, "w", newline=""))

for gID in groupIDs:
    # page=True makes facepy follow the paging links for us
    groupData = graph.get(gID + "/feed", page=True, retry=3, limit=500)
    for data in groupData:
        # DecimalEncoder is a custom json.JSONEncoder defined elsewhere in the original script
        json_data = json.dumps(data, indent=4, cls=DecimalEncoder)
        data = json.loads(json_data)
        print("Paging group data...")
        for item in data["data"]:
            ...  # etc., dealing with items
Check the API reference. You should use feed.
You can use /{group-id}/feed to get an array of Post objects of the group. Remember to include a user access token for a member of the group.
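If you'd rather not pull in facepy, a bare-requests sketch of the same /{group-id}/feed call could look like this (the token and group ID are placeholders, and you may need a versioned path such as /vX.Y/ depending on your app):

import requests

ACCESS_TOKEN = "USER_ACCESS_TOKEN"  # must belong to a member of the group
GROUP_ID = "GROUP_ID"

url = f"https://graph.facebook.com/{GROUP_ID}/feed"
params = {"access_token": ACCESS_TOKEN, "limit": 100}

while url:
    resp = requests.get(url, params=params).json()
    for post in resp.get("data", []):
        print(post.get("id"), post.get("message", "")[:80])
    # Follow the paging cursor until there are no more pages
    url = resp.get("paging", {}).get("next")
    params = {}  # the "next" URL already contains the query parameters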
I am working on writing a keyword extractor in Python. I would like to use the Yahoo Content API. The question is: is there a Python 2.7 (or even 3.x) wrapper for the Yahoo Content API? I could not find one through normal searches.
In parallel, I am trying AlchemyAPI, OpenCalais, and DBpedia Spotlight. I would love to make a comparison to figure out which one to use in production.
Any guidance would be most appreciated.
Thanks
I was interested in the answer as well. This is a possible solution:
import requests
text = """
Italian sculptors and painters of the renaissance favored the Virgin Mary for inspiration
"""
payload = {'q': "select * from contentanalysis.analyze where text='{text}'".format(text=text)}
r = requests.post("http://query.yahooapis.com/v1/public/yql", data=payload)
print(r.text)
According to the documentation, you can POST requests to the Yahoo Content API and get JSON back. Python has the urllib2, requests, and json libraries for that, all of which are well documented and easy to use.
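A small extension of the snippet above that asks YQL for JSON explicitly and parses it with requests (the format parameter and the exact response layout are my assumptions, so treat this as a sketch):

import requests

text = "Italian sculptors and painters of the renaissance favored the Virgin Mary for inspiration"
payload = {
    'q': "select * from contentanalysis.analyze where text='{text}'".format(text=text),
    'format': 'json',  # assumption: YQL returns JSON when this parameter is set
}

r = requests.post("http://query.yahooapis.com/v1/public/yql", data=payload)
data = r.json()
# Drill into the result cautiously, since the exact JSON layout may vary
print(data.get('query', {}).get('results'))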