YouTube API: get the current search results? - Python

I am trying to get a JSON list of videos. My problem is that I can't get the current search results: if I search on youtube.com I get different results than when I run my Python script. I'm asking because I couldn't find this covered anywhere on Stack Overflow.
Code:
import json
import requests

# api_key is assumed to be defined elsewhere (a YouTube Data API v3 key).

def getVideo():
    parameters = {"part": "snippet",
                  "maxResults": 5,
                  "order": "relevance",
                  "pageToken": "",
                  "publishedAfter": None,
                  "publishedBefore": None,
                  "q": "",
                  "key": api_key,
                  "type": "video",
                  }
    url = "https://www.googleapis.com/youtube/v3/search"
    parameters["q"] = "Shroud"
    page = requests.request(method="get", url=url, params=parameters)
    j_results = json.loads(page.text)
    print(page.text)
    #print(j_results)

getVideo()
I have some thoughts: I think it's because of the variables publishedAfter and publishedBefore, but I don't know how to fix it.
Best Regards
Cren1993

Search: list
Returns a collection of search results that match the query parameters specified in the API request. By default, a search result set identifies matching video, channel, and playlist resources, but you can also configure queries to only retrieve a specific type of resource.
Nowhere in there does it mention that the search results from the YouTube API will be exactly the same as the results on the YouTube website. They are different systems. The API is intended for third-party developers like yourself. The YouTube website is controlled by Google and probably contains extra search features that Google has not exposed to third-party developers, either because they can't or because they don't want to.
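As a side note on publishedAfter and publishedBefore: requests silently drops query parameters whose value is None, so those two keys are not what changes the results. A minimal sketch of the same search without the optional keys (api_key is assumed to hold a valid YouTube Data API v3 key):
# Minimal sketch, assuming api_key holds a valid YouTube Data API v3 key.
import requests

def search_videos(query, api_key, max_results=5):
    url = "https://www.googleapis.com/youtube/v3/search"
    params = {
        "part": "snippet",
        "maxResults": max_results,
        "order": "relevance",
        "type": "video",
        "q": query,
        "key": api_key,
    }
    response = requests.get(url, params=params)
    response.raise_for_status()
    return response.json()["items"]

# Example: print the titles the API (not the website) returns for "Shroud".
# for item in search_videos("Shroud", api_key):
#     print(item["snippet"]["title"])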

Related

How to update multiple playlist items' positions at once using the YouTube API?

I'd like to sort and update YouTube playlist items by my own criteria, but the problem is that I can't work out how to update the positions of all the playlist items at once.
The request body requires data in the following format:
body = {
    "snippet": {
        "playlistId": "PLyR_eqaLz2hmBPeDYO3pyXaqexCIV-PGp",
        "resourceId": {
            "kind": "youtube#video",
            "videoId": "fc6ZQX46Kz8"
        },
        "position": 3
    },
    "id": "UEx5Ul9lcWFMejJobUJQZURZTzNweVhhcWV4Q0lWLVBHcC41MjE1MkI0OTQ2QzJGNzNG"
}
The problem with that format is that I can only update a single video per request. A single request costs around 50 units of quota and is also slow, so it's not suitable for a fairly large playlist.
Can anyone please explain how I can update multiple items in a single request body? If anyone has already done this, please share your code, whether it's for the YouTube API or for any other app.
Thanks
If you check the documentation for PlaylistItems: update, you will notice that it says:
Modifies a playlist item. For example, you could update the item's position in the playlist.
It updates a single playlist item. You cannot update more than one.
Most of the Google APIs do support batching, but this is not going to help you, as batching still uses the same amount of quota as sending the requests individually. I can't find any documentation, though, that says the YouTube Data API even supports batching.
Your best bet is to stick with what you are doing now and request additional quota if you need it.
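To illustrate the one-item-at-a-time limitation, here is a minimal sketch (not an official batch mechanism) that reorders a playlist by issuing one playlistItems.update call per item with google-api-python-client. It assumes youtube is an already-authorized client and ordered_items is a list of (playlist_item_id, video_id) tuples in the order you want:
# Minimal sketch, assuming `youtube` was built with something like
# googleapiclient.discovery.build("youtube", "v3", credentials=creds).
# Each item needs its own update call, costing roughly 50 quota units.
def reorder_playlist(youtube, playlist_id, ordered_items):
    for position, (item_id, video_id) in enumerate(ordered_items):
        body = {
            "id": item_id,
            "snippet": {
                "playlistId": playlist_id,
                "position": position,
                "resourceId": {
                    "kind": "youtube#video",
                    "videoId": video_id,
                },
            },
        }
        youtube.playlistItems().update(part="snippet", body=body).execute()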
It is not possible to update multiple items in the playlist at once. If you look closely, its behavior is like an insertion: inserting an item causes all the other items to shift, depending on where the item is inserted. The same behavior applies to update and delete.
For example, if you insert an item in the first position, all the other items in the playlist move back by one position.

How to use the YouTube API v3 to filter out videos by category?

I'm trying to build a YouTube query in Python that filters out, for instance, category 39 (Horror).
I'm looking to something similar to this:
searchResults = YouTube.Search.list('id,snippet', {
    maxResults: "10",
    q: "The ring",
    type: "video",
    videoCategoryId: "-10"
});
Or any kind of workaround.
I even contemplated searching without the filter and then trying to exclude the videos from that category, but the information returned by the query does not include the video's category. It's also pretty ugly, since I really want 10 results and might need to make more queries to fill the list.
All I've found googling is how to get videos from the specified category.
Can you help me on how to achieve this?
Using the YouTube Data API, you can check the existing videoCategories here.
In this demo, available in the Google API Explorer, you can retrieve the videoCategories.
Once you have selected the videoCategoryId, use the following search request:
https://www.googleapis.com/youtube/v3/search?part=snippet&maxResults=10&q=The+ring&regionCode=US&type=video&videoCategoryId=1&fields=items(id(channelId%2CvideoId)%2Csnippet(channelId%2CchannelTitle%2Cdescription%2Ctitle))%2CnextPageToken%2CpageInfo%2CprevPageToken%2CregionCode&key={YOUR_API_KEY}
Here, I'm using the search.list request to get videos from the US region with videoCategoryId 1 (Film & Animation), retrieving the title of each YouTube video, its videoId, and the channelTitle and channelId of the channel that uploaded it.
Here is the demo.
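For reference, here is what that same request could look like in Python with requests (api_key is assumed to be your own key; the parameters mirror the URL above):
# Python version of the search request above; replace api_key with your own key.
import requests

api_key = "YOUR_API_KEY"

params = {
    "part": "snippet",
    "maxResults": 10,
    "q": "The ring",
    "regionCode": "US",
    "type": "video",
    "videoCategoryId": "1",  # Film & Animation
    "fields": "items(id(channelId,videoId),snippet(channelId,channelTitle,description,title)),nextPageToken,pageInfo,prevPageToken,regionCode",
    "key": api_key,
}
response = requests.get("https://www.googleapis.com/youtube/v3/search", params=params)
print(response.json())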
It seems that videoCategoryId is not supported in all regions (this might be the reason I couldn't retrieve videos with videoCategoryId=39, which is the Horror category in the US).
For more information, read this answer.
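Since search.list can only include a single category rather than exclude one, one possible workaround, sketched below, is to over-fetch search results and then look up each video's snippet.categoryId with videos.list (which, unlike search.list, does return the category), dropping the unwanted category afterwards. api_key is assumed to be your own key:
# Sketch of the exclusion workaround; api_key is assumed to be a valid key.
import requests

API = "https://www.googleapis.com/youtube/v3"

def search_excluding_category(query, excluded_category_id, api_key, wanted=10):
    # Over-fetch, because some results will be filtered out afterwards.
    search = requests.get(f"{API}/search", params={
        "part": "snippet", "q": query, "type": "video",
        "maxResults": 50, "key": api_key,
    }).json()
    video_ids = [item["id"]["videoId"] for item in search.get("items", [])]
    if not video_ids:
        return []

    # videos.list returns snippet.categoryId, which search.list does not.
    videos = requests.get(f"{API}/videos", params={
        "part": "snippet", "id": ",".join(video_ids), "key": api_key,
    }).json()

    kept = [v for v in videos.get("items", [])
            if v["snippet"]["categoryId"] != str(excluded_category_id)]
    return kept[:wanted]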

Get Information from Microsoft Academic Graph API

I am trying to use the Microsoft Academic Graph API with Python to get information about an author's affiliations. However, the information provided in
https://learn.microsoft.com/en-us/azure/cognitive-services/academic-knowledge/graphsearchmethod
is not clear to me.
I have also read Microsoft Academic Graph Search - retrieving all papers from a journal within a time-frame?
I am trying with something like this:
import requests
url = "https://westus.api.cognitive.microsoft.com/academic/v1.0/graph/search"
querystring = {"mode":"json%0A"}
payload = "{}"
response = requests.request("POST", url, data=payload, params=querystring)
print(response.text)
What should I put in "payload" to retrieve the affiliation of, for example, the author "John Doe"?
It seems you are using the wrong endpoint.
As with anything experimental, the documentation seems to be out of date.
I have had success calling https://api.labs.cognitive.microsoft.com/academic/v1.0/evaluate
These endpoints can be seen in the cognitive labs documentation.
I have yet to figure out how to retrieve academic profiles, as the query below yields no results, whereas academic.microsoft.com returns plenty.
https://api.labs.cognitive.microsoft.com/academic/v1.0/evaluate?expr=Composite(AA.AuN='Harry L. Anderson')&model=latest&count=10&attributes=Id,Ti,AA.AuN,E,AA.AuId
Hope this may help anyone stumbling upon this.
Update:
Here is a working query for the same author:
https://api.labs.cognitive.microsoft.com/academic/v1.0/evaluate?model=latest&count=100&expr=Composite(AA.AuN=='harry l anderson')&attributes=Id,Ti,AA.AuN,E,AA.AuId
Notice the author name has to be in lowercase.
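For completeness, a minimal Python sketch of that working query against the evaluate endpoint; the Ocp-Apim-Subscription-Key header is the usual Cognitive Services convention and is an assumption here, as is the "entities" key in the response:
# Minimal sketch of the working query above. The Ocp-Apim-Subscription-Key
# header and the "entities" response key are assumptions based on the usual
# Cognitive Services conventions.
import requests

ENDPOINT = "https://api.labs.cognitive.microsoft.com/academic/v1.0/evaluate"

def author_papers(author_name, subscription_key, count=100):
    params = {
        "model": "latest",
        "count": count,
        # The author name must be lowercase, as noted above.
        "expr": f"Composite(AA.AuN=='{author_name.lower()}')",
        "attributes": "Id,Ti,AA.AuN,E,AA.AuId",
    }
    headers = {"Ocp-Apim-Subscription-Key": subscription_key}
    response = requests.get(ENDPOINT, params=params, headers=headers)
    response.raise_for_status()
    return response.json().get("entities", [])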
There's a tool for migrating the MAG into Apache Elasticsearch ;)
https://github.com/vwoloszyn/mag2elasticsearch

Facebook Graph API: Cannot pass categories array in Python

I'm trying to use Facebook's Graph API to get all the restaurants within a certain location. I'm doing this in Python, and my URL is
url ="https://graph.facebook.com/search?type=place&center=14.6091,121.0223,50000&categories=[\"FOOD_BEVERAGE\"]&fields=name,location,category,fan_count,rating_count,overall_star_rating&limit=100&access_token=" + access
However, this is the error message I get.
{
  "error": {
    "message": "(#100) For field 'placesearch': param categories must be an array.",
    "code": 100,
    "type": "OAuthException",
    "fbtrace_id": "EGQ8YdwnzUT"
  }
}
But when I paste the URL into the Graph API Explorer (linked below), it works. I can't do this exhaustively in the explorer because I need to collect restaurant data from all the following pages. Can someone explain why this is happening and how to fix it so that I can access it through Python?
Example Link
You did not specify an API version, so this will fall back to the lowest version your app can use.
I can reproduce the error for API versions <= v2.8 in Graph API Explorer - for v2.9 and above it works.
So use https://graph.facebook.com/v2.9/search?… (at least, or a higher version, up to you.)
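A minimal Python sketch of the same request with the version pinned, letting requests do the URL encoding and passing categories as a JSON-encoded array (access_token is assumed to be a valid token; the other parameters mirror the question's URL):
# Minimal sketch with the API version pinned; access_token is assumed valid.
import json
import requests

def nearby_food_places(access_token):
    url = "https://graph.facebook.com/v2.9/search"
    params = {
        "type": "place",
        "center": "14.6091,121.0223,50000",           # as in the question's URL
        "categories": json.dumps(["FOOD_BEVERAGE"]),   # must be a JSON array
        "fields": "name,location,category,fan_count,rating_count,overall_star_rating",
        "limit": 100,
        "access_token": access_token,
    }
    response = requests.get(url, params=params)
    response.raise_for_status()
    return response.json()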

Listing extractors from import.io

I would like to know how to get the crawling data (list of URLs manually input through the GUI) from my import.io extractors.
The API documentation is very sparse, and it does not specify whether the GET requests I make actually start a crawler (and consume one of my available crawler runs) or just query the results of manually launched crawls.
I would also like to know how to obtain the connector ID. As I understand it, an extractor is nothing more than a specialized connector, but when I use the extractor_id as the connector ID when querying the API, I get an error saying the connector does not exist.
One way I thought I could list the URLs in one of my extractors is this:
https://api.import.io/store/connector/_search?_sortDirection=DESC&_default_operator=OR&_mine=true&_apikey=123...
But the only result I get is:
{ "took": 2, "timed_out": false, "hits": {
"total": 0,
"hits": [],
"max_score": 0 } }
Nevertheless, even if I got a more complete response, the example result shown in the documentation does not mention any kind of list or element containing the URLs I'm trying to get from my import.io account.
I am using Python to query this API.
The legacy API will not work for any non-legacy connectors, so you will have to use the new Web Extractor API. Unfortunately, there is no documentation for this.
Luckily, with some snooping you can find the following call, which lists the extractors associated with your API key:
https://store.import.io/store/extractor/_search?_apikey=YOUR_API_KEY
From here, check each hit and verify that the _type property is set to EXTRACTOR. This will give you access to, among other things, the GUID associated with the extractor and the name you chose for it when you created it.
You can then do the following to download the latest run from the extractor in CSV format:
https://data.import.io/extractor/{{GUID}}/csv/latest?_apikey=YOUR_API_KEY
This was found in the Integrations tab of every Web Extractor. There are other queries there as well.
Hope this helps.
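Tying the two calls above together, a minimal Python sketch; the response field names (hits.hits, _type, the extractor GUID) are assumptions based on the Elasticsearch-style response shown in the question, so adjust them to what your account actually returns:
# Minimal sketch of the two calls above. The response field names are
# assumptions based on the Elasticsearch-style output in the question.
import requests

def list_extractors(api_key):
    url = "https://store.import.io/store/extractor/_search"
    response = requests.get(url, params={"_apikey": api_key})
    response.raise_for_status()
    hits = response.json().get("hits", {}).get("hits", [])
    # Keep only hits whose _type is EXTRACTOR, as described above.
    return [hit for hit in hits if hit.get("_type") == "EXTRACTOR"]

def latest_run_csv(extractor_guid, api_key):
    url = f"https://data.import.io/extractor/{extractor_guid}/csv/latest"
    response = requests.get(url, params={"_apikey": api_key})
    response.raise_for_status()
    return response.text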
