I am playing with the Facebook Graph API. I am interested in getting results from a pages search.
I can look for places like this:
search?q=audi&type=place
and I get JSON with the data I want, like website, location, etc.
Example:
{
"data": [
{
"category": "Automotive Parts Store",
"category_list": [
{
"id": "139492049448901",
"name": "Automotive Parts Store"
}
],
"location": {
"city": "Belgrade",
"country": "Serbia",
"latitude": 44.8206,
"longitude": 20.4622,
"zip": "11000"
},
But when I want to search pages and get public page data as JSON with a request like this:
search?q=audi&type=page
this is the JSON I get:
{
"data": [
]
}
What's happening here? Why can't I run this search, and is there a way to achieve it?
I have been reading the Facebook API documentation for two days now and I can't find anything relevant.
Here is the Python code I am using (token removed):
import facebook  # pip install facebook-sdk

token = "////////"
graph = facebook.GraphAPI(access_token=token, version="2.7")  # version should be a string
events = graph.request("/search?type=page&q=cafe&fields=about,fan_count,website")
print(events)
Use the pages/search endpoint.
Here is the API endpoint; use it with a valid access token:
https://graph.facebook.com/v3.2/pages/search?q=m&fields=id,is_eligible_for_branded_content,is_unclaimed,link,location,name,verification_status
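As a minimal sketch (assuming the requests library; ACCESS_TOKEN is a placeholder for your valid token), calling that endpoint from Python could look like this:

import requests  # pip install requests

ACCESS_TOKEN = "..."  # your valid access token

resp = requests.get(
    "https://graph.facebook.com/v3.2/pages/search",
    params={
        "q": "m",
        "fields": "id,is_eligible_for_branded_content,is_unclaimed,"
                  "link,location,name,verification_status",
        "access_token": ACCESS_TOKEN,
    },
)
resp.raise_for_status()  # surface HTTP errors early
for page in resp.json().get("data", []):
    print(page.get("name"), page.get("link"))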
I'm exploring the Dataplex API with Python in the Google documentation. There is documentation for getting Lakes, Zones, Assets, etc. I've explored it, but I didn't find anything related to tag policies; for example, I need to attach my Tag Template and add a Policy Tag to my BigQuery table via the API.
Is it possible to attach a Tag Template and add a Policy Tag to a BigQuery table via the API?
Here is the link that I've explored:
Dataplex API Service
Dataplex API Metadata Service
Data Catalog API
To attach tag templates to a BigQuery table, first you will have to look up the entry using this API:
https://cloud.google.com/python/docs/reference/datacatalog/latest/google.cloud.datacatalog_v1.types.LookupEntryRequest
and then attach it to the table using this API:
https://cloud.google.com/python/docs/reference/datacatalog/latest/google.cloud.datacatalog_v1.types.CreateTagRequest
Here's sample code that creates a tag template and also attaches it to a table in the same code base:
https://cloud.google.com/data-catalog/docs/samples/data-catalog-quickstart
and to attach a policy tag, use this API:
https://cloud.google.com/python/docs/reference/datacatalog/latest/google.cloud.datacatalog_v1.types.CreatePolicyTagRequest
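As a rough sketch of the policy-tag side (assuming the google-cloud-datacatalog package; the taxonomy name below is hypothetical and must exist before the call):

from google.cloud import datacatalog_v1  # pip install google-cloud-datacatalog

# Hypothetical, pre-created taxonomy that the policy tag will live under.
taxonomy = "projects/myproject/locations/australia-southeast1/taxonomies/my-taxonomy"

client = datacatalog_v1.PolicyTagManagerClient()
policy_tag = client.create_policy_tag(
    parent=taxonomy,
    policy_tag=datacatalog_v1.PolicyTag(display_name="PII"),
)
print(policy_tag.name)  # reference this name from a BigQuery column's policy tags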
Hope this helps.
To explain in more detail, I simulated the Python API's behaviour using the Google Cloud API Explorer; see below.
The entry lookup is how you search for the object(s) you want to attach a tag or tag template to.
Here's how I simulated the API calls using the API Explorer.
To attach a tag to a BigQuery table, the first step is to look the table up using the Data Catalog API below:
LookupEntryRequest
The parameter I passed to get the response below is:
sqlResource: "bigquery.table.myproject.zz_DataSet.tblOne"
The above should give you output like this:
{
"name": "projects/myproject/locations/australia-southeast2/entryGroups/#bigquery/entries/mykey",
"type": "TABLE",
"schema": {
"columns": [
{
"type": "STRING",
"mode": "NULLABLE",
"column": "firstname"
},
{
"type": "STRING",
"mode": "NULLABLE",
"column": "lastname"
}
]
},
"sourceSystemTimestamps": {
"createTime": "2023-01-16T04:22:49.397Z",
"updateTime": "2023-01-16T04:22:49.397Z"
},
"linkedResource": "//bigquery.googleapis.com/projects/myproject/datasets/zz_DataSet/tables/tblOne",
"bigqueryTableSpec": {
"tableSourceType": "BIGQUERY_TABLE"
},
"usageSignal": {
"updateTime": "2023-02-05T07:59:59.928Z",
"usageWithinTimeRange": {
"30D": {
"totalCompletions": 7,
"totalFailures": 1,
"totalExecutionTimeForCompletionsMillis": 7385
}
}
},
"integratedSystem": "BIGQUERY",
"fullyQualifiedName": "bigquery:myproject.zz_DataSet.tblOne"
}
The search gives you the ability to query multiple tables or attach tags at the dataset level too; see the parameters section in the link above. This is why I suggest you use the entry lookup first, as it makes for more scalable code.
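For reference, a minimal Python sketch of the same lookup (assuming the google-cloud-datacatalog package and the example table name from above):

from google.cloud import datacatalog_v1  # pip install google-cloud-datacatalog

client = datacatalog_v1.DataCatalogClient()
# Look the table up by its SQL resource name, as in the API Explorer call above.
entry = client.lookup_entry(
    request={"sql_resource": "bigquery.table.myproject.zz_DataSet.tblOne"}
)
print(entry.name)  # becomes the parent of the CreateTagRequest below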
API call two: this is how I simulated attaching the tag to the resource. Go to the link below:
CreateTagRequest
As an example, I pre-created a tag template from the console and then passed its template ID as a parameter to the request.
Input:
parent: projects/myproject/locations/australia-southeast2/entryGroups/@bigquery/entries/mykey (from the name element above)
request body:
{
"template": "projects/myproject/locations/australia-southeast1/tagTemplates/api_call_test_tag_template",
"fields": {
"name": {
"stringValue": "apitestcall"
}
}
}
Output:
Below is the response generated. If you look in the Data Catalog console, you will see the BigQuery table with the tag template attached to it, with the name field set to "apitestcall".
{
"name": "projects/myproject/locations/australia-southeast2/entryGroups/#bigquery/entries/mykey/tags/tagsKey",
"template": "projects/myproject/locations/australia-southeast1/tagTemplates/api_call_test_tag_template",
"fields": {
"name": {
"displayName": "name",
"stringValue": "apitestcall"
}
},
"templateDisplayName": "api-call-test-tag-template"
}
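A minimal Python sketch of the same call (assuming the entry returned by the lookup sketch above and the same pre-created tag template):

from google.cloud import datacatalog_v1  # pip install google-cloud-datacatalog

client = datacatalog_v1.DataCatalogClient()

tag = datacatalog_v1.Tag()
tag.template = (
    "projects/myproject/locations/australia-southeast1"
    "/tagTemplates/api_call_test_tag_template"
)
tag.fields["name"] = datacatalog_v1.TagField()
tag.fields["name"].string_value = "apitestcall"

# entry.name is the parent path obtained from lookup_entry above.
created = client.create_tag(parent=entry.name, tag=tag)
print(created.name)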
Finally, please do make sure you have all the correct IAM permissions required for this task.
What is an elegant way to access the URLs of tweets' pictures with tweepy v2?
Twitter released v2 of their API, and tweepy adjusted their Python module to it (tweepy v2).
Let's say, for example, I have a dataframe of tweets created with tweepy, holding the tweet ID and so on. There is a row with this example tweet:
https://twitter.com/federalreserve/status/1501967052080394240
The picture is saved under a different URL, and the tweepy v2 documentation does not reveal whether there is a way to access it:
https://pbs.twimg.com/media/FNgO9vNXIAYy2LK?format=png&name=900x900
Reading through the response JSON obtained via tweepy.Client(bearer_token, return_type=requests.Response) did not turn up the required links.
I am already using the following parameters in the client:
client.get_liked_tweets(
    user_id,
    tweet_fields=['created_at', 'text', 'id', 'attachments', 'author_id', 'entities'],
    media_fields=['preview_image_url', 'url'],
    user_fields=['username'],
    expansions=['attachments.media_keys', 'author_id'],
)
What would be a way to obtain or generate the link to the picture of the tweet? Preferably through tweepy v2 itself?
Thank you in advance.
The arguments of get_liked_tweets seem to be correct.
Have you looked in the includes dict at the root of the response?
The media fields are not in data, so you should have something like this:
{
"data": {
"attachments": {
"media_keys": [
"16_1211797899316740096"
]
},
"author_id": "2244994945",
"id": "1212092628029698048",
"text": "[...]"
},
"includes": {
"media": [
{
"media_key": "16_1211797899316740096",
"preview_image_url": "[...]",
"url": "[...]"
}
],
"users": [
{
"id": "2244994945",
"username": "TwitterDev"
}
]
}
}
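As a sketch of one way to join the two (assuming a valid bearer token and a user_id variable; this uses the tweepy v2 Response object, which exposes data and includes):

import tweepy  # pip install tweepy

client = tweepy.Client(bearer_token="...")

response = client.get_liked_tweets(
    user_id,
    tweet_fields=["created_at", "attachments"],
    media_fields=["preview_image_url", "url"],
    expansions=["attachments.media_keys"],
)

# Index the media objects from includes by their media_key.
media_by_key = {m.media_key: m for m in response.includes.get("media", [])}

for tweet in response.data:
    attachments = tweet.data.get("attachments") or {}
    for key in attachments.get("media_keys", []):
        media = media_by_key[key]
        # Photos carry url; videos and GIFs carry preview_image_url instead.
        print(tweet.id, media.url or media.preview_image_url)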
I use DocuSign every day and have set up a Google Sheet that auto-populates the email subject and body.
I'm still having to copy and paste this data into the 'email subject' and 'email message' fields. See the screenshot example.
Is there a way I can reduce this process to the click of a button?
Absolutely. Here are two rough flows that would be helpful to pursue.
You could make use of Google Apps Script to push data directly to DocuSign. The downside here is that you'd be writing the code in JavaScript.
You could write a short Python app yourself that uses the Google Sheets API (via the Python SDK) to retrieve your data, then passes the appropriate info along to the DocuSign eSignature API (via the Python SDK).
In either case, once you have access to the data you wish to forward to DocuSign, you will first want to authenticate your account so you can make API requests on its behalf.
Then you can make a request like this:
POST /envelopes
{
"emailSubject": "Please sign these documents",
"emailBlurb": "Thank you for subscribing. Click the link to sign",
"documents": [
{
"documentBase64": "JVBER...iUlRU9GCg==",
"name": "Test Doc",
"fileExtension": "pdf",
"documentId": "1"
}
],
"recipients": {
"signers": [
{
"email": "test#test.com",
"name": "Test User",
"recipientId": "1",
"routingOrder": "1",
"tabs": {
"numberTabs": [
{
"tabLabel": "PO #",
"locked": "false",
"xPosition": "200",
"yPosition": "200",
"documentId": "1",
"pageNumber": "1"
}
]
}
}
]
},
"status": "sent"
}
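As a rough Python sketch of that request (assuming the requests library, a valid OAuth access token, and your API account ID; the base URI below is DocuSign's demo environment, and the subject/blurb would come from your sheet row):

import base64
import requests  # pip install requests

ACCESS_TOKEN = "..."  # obtained via DocuSign OAuth
ACCOUNT_ID = "..."    # your DocuSign API account ID
BASE = "https://demo.docusign.net/restapi/v2.1"

with open("test_doc.pdf", "rb") as f:
    doc_b64 = base64.b64encode(f.read()).decode("ascii")

envelope = {
    # These two values would be read from your Google Sheet.
    "emailSubject": "Please sign these documents",
    "emailBlurb": "Thank you for subscribing. Click the link to sign",
    "documents": [
        {"documentBase64": doc_b64, "name": "Test Doc",
         "fileExtension": "pdf", "documentId": "1"}
    ],
    "recipients": {
        "signers": [
            {"email": "test@test.com", "name": "Test User",
             "recipientId": "1", "routingOrder": "1"}
        ]
    },
    "status": "sent",
}

resp = requests.post(
    f"{BASE}/accounts/{ACCOUNT_ID}/envelopes",
    json=envelope,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
print(resp.json()["envelopeId"])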
I have written this Python code to retrieve all Facebook pages that are written in Arabic:
import facebook # pip install facebook-sdk
import json
import codecs
from prettytable import PrettyTable
from collections import Counter
# A helper function to pretty-print Python objects as JSON
def pp(o):
    print(json.dumps(o, indent=1))
# Create a connection to the Graph API with your access token
ACCESS_TOKEN = ''  # my access token
g = facebook.GraphAPI(ACCESS_TOKEN)
s = g.request('search', {'q': '&',
                         'type': 'page',
                         'limit': 5000,
                         'locale': 'ar_AR'})
pp(s)
The locale parameter should return all pages written in Arabic. However, as the output below shows, I get results that contain English. What am I doing incorrectly?
{
"data": [
{
"category": "\u0628\u0636\u0627\u0626\u0639 \u0627\u0644\u0628\u064a\u0639 \u0628\u0627\u0644\u062a\u062c\u0632\u0626\u0629 \u0648\u0628\u0636\u0627\u0626\u0639 \u0627\u0644\u0645\u0633\u062a\u0647\u0644\u0643\u064a\u0646",
"name": "Stop & Shop",
"category_list": [
{
"id": "169207329791658",
"name": "\u0645\u062d\u0644 \u0628\u0642\u0627\u0644\u0629"
}
],
"id": "170000993071234"
},
{
"category": "\u0628\u0636\u0627\u0626\u0639 \u0627\u0644\u0628\u064a\u0639 \u0628\u0627\u0644\u062a\u062c\u0632\u0626\u0629 \u0648\u0628\u0636\u0627\u0626\u0639 \u0627\u0644\u0645\u0633\u062a\u0647\u0644\u0643\u064a\u0646",
"name": "C&A",
"category_list": [
{
"id": "186230924744328",
"name": "\u0645\u062a\u062c\u0631 \u0645\u0644\u0627\u0628\u0633"
}
],
"id": "109345009145382"
},
Your query is 100% correct and should only return Arabic pages. Unfortunately, this is a known Facebook Graph Search API bug; it seems to flip back and forth between working and not working.
See the discussions at https://developers.facebook.com/bugs/294623187324442 and https://developers.facebook.com/bugs/409365862525282
I had similar issues working with the Facebook Graph API; it never seems to work quite right.
I've been playing around with the Facebook API for a bit now, and I think I have it working quite well. However, when fetching my friends list I bump into a limit of 270 (271?) items returned, with a paging key in the JSON data.
Naturally I try to iterate through the next page given in the paging key; however, the array returned from the next page is empty. It contains a next and previous key but no actual data. Does anyone know what's wrong?
I tried it straight in the browser, to rule out any programming errors, and it's the same there as it is in the code:
https://graph.facebook.com/me/friends?access_token=[ACCESS_TOKEN]&limit=5000
I've also tried with &offset=269 etc.; nothing really works. Here's the output:
{
"data": [
{
"name": "Person A",
"id": "..."
},
{
"name": "Person B",
"id": "..."
},
{
"name": "Person C",
"id": "..."
}
],
"paging": {
"next": "https://graph.facebook.com/me/friends?limit=5000&offset=5268&value=1&access_token=[ACCESS_TOKEN]&__after_id=[Person C ID]",
"previous": " Previous URL ... "
}
}
When trying this URL in the browser (or via code), I get this:
{
"data": [
],
"paging": {
"previous": "https://graph.facebook.com/me/friends?limit=5000&offset=268&value=1&access_token=[ACCESS_TOKEN]"
}
}
Why is this, and how do you go about fixing it?
I appreciate all the help I can get, thank you!
Edit: I have 284 friends, so there should be 10+ on the "next" page.
(Programming done in Python via the official Python SDK, modified to handle paging.)
Maybe it's because you only have 270 (271) friends? Unless you are sure you have more...
If you do have more than 270 friends, there could be two other reasons:
Those 14 users have prevented apps from accessing their data via the API.
Facebook has cached your friends list, and you need to wait for the cache to be updated.
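For what it's worth, here is a minimal sketch of following the paging links with requests (ACCESS_TOKEN is a placeholder); if those profiles are hidden from the API, the later pages will stay empty no matter how you page:

import requests  # pip install requests

ACCESS_TOKEN = "..."
url = "https://graph.facebook.com/me/friends"
params = {"access_token": ACCESS_TOKEN, "limit": 100}

friends = []
while url:
    payload = requests.get(url, params=params).json()
    batch = payload.get("data", [])
    if not batch:
        break  # empty page: either the real end, or profiles hidden from the API
    friends.extend(batch)
    # Follow the fully qualified "next" URL until it disappears.
    url = payload.get("paging", {}).get("next")
    params = None  # "next" already carries the full query string

print(len(friends))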