I am using the following endpoint to retrieve the posts on a page:
https://api.linkedin.com/v2/shares?q=owners&owners=urn:li:organization:xxxxx
Even though there are 235 posts in total, I am getting an empty elements array:
{
"elements": [],
"paging": {
"total": 235,
"count": 10,
"start": 0,
"links": [
{
"rel": "next",
"href": "/v2/shares?count=10&owners=urn:li:organization:XXXXXXX&q=owners&start=0",
"type": "application/json"
}
]
}
}
I have added the X-Restli-Protocol-Version: 2.0.0 header, and I have the following scopes on the LinkedIn API access token:
r_organization_social
r_1st_connections_size
r_emailaddress
r_ads_reporting
rw_organization_admin
r_liteprofile
r_basicprofile
rw_ads
r_ads
w_member_social
w_organization_social
I don't know where this is going wrong.
I found this issue here:
Linkedin share/ugc post api is not providing posts
but it only says "Using the right credentials worked", and I don't quite understand what was meant by that.
Problem solved! All I needed was admin access to the organization page...
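For anyone hitting the same wall: once the authenticated member is an admin of the organization page, the same call starts returning data. Here is a minimal sketch of the request with Python's requests library, assuming a placeholder token and organization URN:
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder
ORG_URN = "urn:li:organization:xxxxx"  # placeholder

response = requests.get(
    "https://api.linkedin.com/v2/shares",
    params={"q": "owners", "owners": ORG_URN, "count": 10, "start": 0},
    headers={
        "Authorization": "Bearer " + ACCESS_TOKEN,
        "X-Restli-Protocol-Version": "2.0.0",
    },
)
data = response.json()
print(len(data.get("elements", [])), "shares in this page of results")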
I'm exploring the Dataplex API with Python from the Google documentation. There is documentation to get Lakes, Zones, Assets, etc. I've been through it, but I didn't find anything related to tag policies; for example, I need to attach my Tag Template and add a Policy Tag to my BigQuery table via the API.
Is it possible to attach a Tag Template and add a Policy Tag to a BigQuery table via the API?
Here is the link that I've explored:
Dataplex API Service
Dataplex API Metadata Service
Data Catalog API
To attach tag templates to a BigQuery table, you will first have to look up the entry with the Data Catalog API:
https://cloud.google.com/python/docs/reference/datacatalog/latest/google.cloud.datacatalog_v1.types.LookupEntryRequest
and then attach the tag to the table using:
https://cloud.google.com/python/docs/reference/datacatalog/latest/google.cloud.datacatalog_v1.types.CreateTagRequest
Here is sample code that creates a tag template and also attaches it to a table in the same code base:
https://cloud.google.com/data-catalog/docs/samples/data-catalog-quickstart
And to attach a policy, use this API:
https://cloud.google.com/python/docs/reference/datacatalog/latest/google.cloud.datacatalog_v1.types.CreatePolicyTagRequest
Hope this helps.
Again, to simulate the Python API's behaviour, I used the Google Cloud API Explorer to walk through it in detail; see below.
The entry lookup is to search for the object(s) you want to attach a tag/tag template to.
Here's how I simulated the API calls using the API Explorer.
To attach a tag to a BigQuery table, the first step is to look the table up using the Data Catalog API below:
LookupEntryRequest
The parameter I passed to get the response below is:
sqlResource: "bigquery.table.myproject.zz_DataSet.tblOne"
The above should give you output like this:
{
"name": "projects/myproject/locations/australia-southeast2/entryGroups/#bigquery/entries/mykey",
"type": "TABLE",
"schema": {
"columns": [
{
"type": "STRING",
"mode": "NULLABLE",
"column": "firstname"
},
{
"type": "STRING",
"mode": "NULLABLE",
"column": "lastname"
}
]
},
"sourceSystemTimestamps": {
"createTime": "2023-01-16T04:22:49.397Z",
"updateTime": "2023-01-16T04:22:49.397Z"
},
"linkedResource": "//bigquery.googleapis.com/projects/myproject/datasets/zz_DataSet/tables/tblOne",
"bigqueryTableSpec": {
"tableSourceType": "BIGQUERY_TABLE"
},
"usageSignal": {
"updateTime": "2023-02-05T07:59:59.928Z",
"usageWithinTimeRange": {
"30D": {
"totalCompletions": 7,
"totalFailures": 1,
"totalExecutionTimeForCompletionsMillis": 7385
}
}
},
"integratedSystem": "BIGQUERY",
"fullyQualifiedName": "bigquery:myproject.zz_DataSet.tblOne"
}
The search gives you the ability to query multiple tables or attach tags at the dataset level too; see the parameters section on the link above. This is why I suggest you use the entry lookup first, as it makes for more scalable code. A rough Python equivalent of the lookup is sketched below.
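For reference, a minimal sketch of that lookup with the google-cloud-datacatalog Python client, assuming the same placeholder project, dataset, and table names as above:
from google.cloud import datacatalog_v1

client = datacatalog_v1.DataCatalogClient()

# Look up the Data Catalog entry for the BigQuery table by its SQL resource name.
entry = client.lookup_entry(
    request={"sql_resource": "bigquery.table.myproject.zz_DataSet.tblOne"}
)
print(entry.name)  # e.g. projects/myproject/locations/.../entryGroups/.../entries/...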
API call two: this is how I simulated attaching the tag to the resource. Go to the link below:
CreateTagRequest
As an example, I pre-created a tag template from the console and then passed its template ID as a parameter to the request.
Input:
parent: projects/myproject/locations/australia-southeast2/entryGroups/#bigquery/entries/mykey (taken from the name element in the lookup response above)
request body:
{
"template": "projects/myproject/locations/australia-southeast1/tagTemplates/api_call_test_tag_template",
"fields": {
"name": {
"stringValue": "apitestcall"
}
}
}
Output:
Below is the response generated. If you look in the Data Catalog console, you will see the BigQuery table with the tag template attached to it, with the name field set to "apitestcall".
{
"name": "projects/myproject/locations/australia-southeast2/entryGroups/#bigquery/entries/mykey/tags/tagsKey",
"template": "projects/myproject/locations/australia-southeast1/tagTemplates/api_call_test_tag_template",
"fields": {
"name": {
"displayName": "name",
"stringValue": "apitestcall"
}
},
"templateDisplayName": "api-call-test-tag-template"
}
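The same call with the Python client would look roughly like this; the tag is attached to the entry returned by the lookup sketch above, and the template path and field name mirror the example:
from google.cloud import datacatalog_v1

client = datacatalog_v1.DataCatalogClient()

tag = datacatalog_v1.Tag(
    template="projects/myproject/locations/australia-southeast1/tagTemplates/api_call_test_tag_template",
    fields={"name": datacatalog_v1.TagField(string_value="apitestcall")},
)

# Attach the tag to the entry found by lookup_entry.
created_tag = client.create_tag(parent=entry.name, tag=tag)
print(created_tag.name)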
Finally, please do make sure you have all the correct IAM permissions required for this task.
What is an elegant way to access the URLs of tweet pictures with Tweepy v2?
Twitter released v2 of their API, and Tweepy adjusted their Python module to it (Tweepy v2).
Let's say, for example, I have a dataframe of tweets created with Tweepy, holding the tweet ID and so on. There is a row with this example tweet:
https://twitter.com/federalreserve/status/1501967052080394240
The picture is saved under a different URL, and the Tweepy v2 documentation does not reveal whether there is a way to access it:
https://pbs.twimg.com/media/FNgO9vNXIAYy2LK?format=png&name=900x900
Reading through the response JSON obtained through tweepy.Client(bearer_token, return_type = requests.Response) did not turn up the required links.
I am already using the following parameters in the client:
client.get_liked_tweets(user_id,
tweet_fields = ['created_at', 'text', 'id', 'attachments', 'author_id', 'entities'],
media_fields=['preview_image_url', 'url'],
user_fields=['username'],
expansions=['attachments.media_keys', 'author_id']
)
What would be a way to obtain or generate the link to the picture of the tweet? Preferably through tweepy v2 itself?
Thank you in advance.
The arguments of get_liked_tweets seem to be correct.
Have you looked in the includes dict at the root of the response?
The media fields are not in data, so you should have something like this:
{
"data": {
"attachments": {
"media_keys": [
"16_1211797899316740096"
]
},
"author_id": "2244994945",
"id": "1212092628029698048",
"text": "[...]"
},
"includes": {
"media": [
{
"media_key": "16_1211797899316740096",
"preview_image_url": "[...]",
"url": "[...]"
}
],
"users": [
{
"id": "2244994945",
"username": "TwitterDev"
}
]
}
}
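Here is a sketch of how you could join the media back to each tweet from the Tweepy response objects, assuming the default tweepy.Client return type (rather than requests.Response) and placeholder values for the bearer token and user_id:
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")  # placeholder token
user_id = 123456789  # placeholder user ID

response = client.get_liked_tweets(
    user_id,
    tweet_fields=["created_at", "text", "id", "attachments", "author_id", "entities"],
    media_fields=["preview_image_url", "url"],
    user_fields=["username"],
    expansions=["attachments.media_keys", "author_id"],
)

# Build a media_key -> URL map from the includes at the root of the response.
media_urls = {
    m.media_key: (m.url or m.preview_image_url)
    for m in response.includes.get("media", [])
}

for tweet in response.data or []:
    keys = (tweet.attachments or {}).get("media_keys", [])
    urls = [media_urls[k] for k in keys if k in media_urls]
    print(tweet.id, urls)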
I am testing the YouTube API with the example code given on their website, trying to post a comment on videos via the video ID only.
Everything works fine, and when I visit the page with my account I see my comment there. But when I go to that specific video with a different account, or just open YouTube in an incognito window, the comment doesn't show up.
On the other hand, if I post the comment manually it shows up everywhere, but I want to be able to do it through the API.
And I have tried it 40-50 times.
My code for this:
def insert_new_comment(youtube, video_id, comment, channelId):
    # Build a commentThreads.insert request for a new top-level comment.
    request = youtube.commentThreads().insert(
        part="snippet",
        body={
            "snippet": {
                "channelId": channelId,
                "videoId": video_id,
                "topLevelComment": {
                    "snippet": {
                        "textOriginal": comment
                    }
                }
            }
        }
    )
    response = request.execute()
    return response
After executing response = request.execute(), inspect the result of the comment insertion. If the request was successful, a commentThread resource for the new comment is returned. It should look something like this:
{
"kind": "youtube#commentThread",
"etag": etag,
"id": string,
"snippet": {
"channelId": string,
"videoId": string,
"topLevelComment": comments Resource,
"canReply": boolean,
"totalReplyCount": unsigned integer,
"isPublic": boolean
},
"replies": {
"comments": [
comments Resource
]
}
}
In this case, the returned commentThread includes a snippet.topLevelComment.snippet.moderationStatus element set to heldForReview, which indicates that the comment has not been published yet because it must first be reviewed by a moderator. A sketch of checking for that status follows.
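A minimal sketch of surfacing that status after the insert, assuming insert_new_comment returns the response as in the code above and that youtube, video_id, comment, and channelId are already defined:
response = insert_new_comment(youtube, video_id, comment, channelId)

# moderationStatus is only present when the comment is held back.
status = (
    response.get("snippet", {})
    .get("topLevelComment", {})
    .get("snippet", {})
    .get("moderationStatus")
)
if status == "heldForReview":
    print("Comment created but held for review; it is not publicly visible yet.")
else:
    print("Comment published with id:", response.get("id"))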
I am playing with the Facebook Graph API. I am interested in getting results like a pages search.
I can look for places like this:
search?q=audi&type=place
and I get JSON with the data I want, like website, location, etc.
Example:
{
"data": [
{
"category": "Automotive Parts Store",
"category_list": [
{
"id": "139492049448901",
"name": "Automotive Parts Store"
}
],
"location": {
"city": "Belgrade",
"country": "Serbia",
"latitude": 44.8206,
"longitude": 20.4622,
"zip": "11000"
},
But when I want to search pages and get public page data in JSON with a request like this:
search?q=audi&type=page
this is the JSON I get:
{
"data": [
]
}
What's happening here? Why can't I do the search, and is there a way to achieve this?
I have been reading the Facebook API documentation for two days now and I can't find anything relevant.
Python code that I am using (token removed):
import facebook

token = "////////"
# The Graph API version is passed as a string in the facebook-sdk client.
graph = facebook.GraphAPI(access_token=token, version="2.7")
events = graph.request("/search?type=page&q=cafe&fields=about,fan_count,website")
print(events)
Use the pages/search endpoint.
Here is the API endpoint; use it with a valid access token:
https://graph.facebook.com/v3.2/pages/search?q=m&fields=id,is_eligible_for_branded_content,is_unclaimed,link,location,name,verification_status
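As a minimal sketch, here is that endpoint called directly with Python's requests library (placeholder token, fields trimmed):
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

resp = requests.get(
    "https://graph.facebook.com/v3.2/pages/search",
    params={
        "q": "audi",
        "fields": "id,link,location,name,verification_status",
        "access_token": ACCESS_TOKEN,
    },
)
print(resp.json())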
I've been playing around with the Facebook API for a bit now, and I think I've got it working quite well. However, when fetching my friends list I bump into a limit of 270 (271?) items returned, with a paging key in the JSON data.
Naturally I tried to iterate through the next page in the paging key; however, the array returned from the next page is empty. It contains a next and previous key but no actual data. Does anyone know what's wrong?
I tried it straight in the browser, just to rule out programming errors, and it's the same there as in the code:
https://graph.facebook.com/me/friends?access_token=[ACCESS_TOKEN]&limit=5000
I've also tried with &offset=269 etc.; nothing really works. Here's the output:
{
"data": [
{
"name": "Person A",
"id": "..."
},
{
"name": "Person B",
"id": "..."
},
{
"name": "Person C",
"id": "..."
}
],
"paging": {
"next": "https://graph.facebook.com/me/friends?limit=5000&offset=5268&value=1&access_token=[ACCESS_TOKEN]&__after_id=[Person C ID]",
"previous": " Previous URL ... "
}
}
When trying this URL in the browser (or via code), I get this:
{
"data": [
],
"paging": {
"previous": "https://graph.facebook.com/me/friends?limit=5000&offset=268&value=1&access_token=[ACCESS_TOKEN]"
}
}
Why is this, and how do you go about fixing it?
I appreciate all the help I can get, thank you!
Edit: I have 284 friends, so there should be 10+ on the "next" page.
(Programming done in Python via the official Python SDK, modified to handle paging.)
Maybe it's because you only have 270 (271) friends? Unless you are sure you have more...
If you do have more than 270 friends, there could be two other reasons:
Those 14 users have prevented apps from accessing their data via the API.
Facebook has cached your friends list, and you need to wait for the cache to be updated.
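For completeness, a defensive paging loop that simply follows paging.next until no data comes back might look like this (placeholder token, raw requests rather than the SDK):
import requests

url = "https://graph.facebook.com/me/friends"
params = {"access_token": "YOUR_ACCESS_TOKEN", "limit": 100}  # placeholder token

friends = []
while url:
    payload = requests.get(url, params=params).json()
    batch = payload.get("data", [])
    if not batch:
        break  # empty page: either the end, or the remaining users are hidden
    friends.extend(batch)
    url = payload.get("paging", {}).get("next")
    params = {}  # the next URL already carries all query parameters

print(len(friends), "friends visible to the app")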