I have written this Python code to retrieve all Facebook pages that are written in Arabic:
import facebook  # pip install facebook-sdk
import json
import codecs
from prettytable import PrettyTable
from collections import Counter

# A helper function to pretty-print Python objects as JSON
def pp(o):
    print(json.dumps(o, indent=1))

# Create a connection to the Graph API with your access token
ACCESS_TOKEN = ''  # my access token
g = facebook.GraphAPI(ACCESS_TOKEN)

s = g.request('search', {'q': '&',
                         'type': 'page',
                         'limit': 5000,
                         'locale': 'ar_AR'})
pp(s)
The locale parameter should return all pages written in Arabic. However, as the output below shows, I get results that contain English. What am I doing incorrectly?
{
"paging": {
"data": [
{
"category": "\u0628\u0636\u0627\u0626\u0639 \u0627\u0644\u0628\u064a\u0639 \u0628\u0627\u0644\u062a\u062c\u0632\u0626\u0629 \u0648\u0628\u0636\u0627\u0626\u0639 \u0627\u0644\u0645\u0633\u062a\u0647\u0644\u0643\u064a\u0646",
"name": "Stop & Shop",
"category_list": [
{
"id": "169207329791658",
"name": "\u0645\u062d\u0644 \u0628\u0642\u0627\u0644\u0629"
}
],
"id": "170000993071234"
},
{
"category": "\u0628\u0636\u0627\u0626\u0639 \u0627\u0644\u0628\u064a\u0639 \u0628\u0627\u0644\u062a\u062c\u0632\u0626\u0629 \u0648\u0628\u0636\u0627\u0626\u0639 \u0627\u0644\u0645\u0633\u062a\u0647\u0644\u0643\u064a\u0646",
"name": "C&A",
"category_list": [
{
"id": "186230924744328",
"name": "\u0645\u062a\u062c\u0631 \u0645\u0644\u0627\u0628\u0633"
}
],
"id": "109345009145382"
},
Your query is 100% correct and should return only Arabic pages. Unfortunately, this is a known Facebook Graph Search API bug; it seems to flip back and forth between working and not working.
See the discussions: https://developers.facebook.com/bugs/294623187324442 and https://developers.facebook.com/bugs/409365862525282
I had similar issues working with the Facebook Graph API; it never seems to work quite right.
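In the meantime, one possible client-side workaround (my own suggestion, not something the Graph API offers) is to drop results whose name contains no Arabic characters:
# Assumes the search result `s` exposes a top-level "data" list,
# as the Graph API normally returns.
def looks_arabic(text):
    # True if the text contains at least one character from the
    # basic Arabic Unicode block (U+0600 - U+06FF).
    return any('\u0600' <= ch <= '\u06ff' for ch in text)

arabic_pages = [page for page in s.get('data', []) if looks_arabic(page['name'])]
pp(arabic_pages)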
What is an elegant way to access the URLs of tweet pictures with tweepy v2?
Twitter released v2 of their API and tweepy adjusted its Python module to it (tweepy v2).
Let's say, for example, I have a DataFrame of tweets created with tweepy, holding the tweet ID and so on. There is a row with this example tweet:
https://twitter.com/federalreserve/status/1501967052080394240
The picture is saved under a different URL, and the tweepy v2 documentation does not reveal whether there is a way to access it:
https://pbs.twimg.com/media/FNgO9vNXIAYy2LK?format=png&name=900x900
Reading through the JSON of the responses obtained via tweepy.Client(bearer_token, return_type=requests.Response), I did not find the required links.
I am already using the following parameters in the client:
client.get_liked_tweets(
    user_id,
    tweet_fields=['created_at', 'text', 'id', 'attachments', 'author_id', 'entities'],
    media_fields=['preview_image_url', 'url'],
    user_fields=['username'],
    expansions=['attachments.media_keys', 'author_id']
)
What would be a way to obtain or generate the link to the picture of a tweet, preferably through tweepy v2 itself?
Thank you in advance.
The arguments of get_liked_tweets seem to be correct.
Have you looked in the includes dict at the root of the response?
The media fields are not in data, so you should have something like this:
{
"data": {
"attachments": {
"media_keys": [
"16_1211797899316740096"
]
},
"author_id": "2244994945",
"id": "1212092628029698048",
"text": "[...]"
},
"includes": {
"media": [
{
"media_key": "16_1211797899316740096",
"preview_image_url": "[...]",
"url": "[...]"
}
],
"users": [
{
"id": "2244994945",
"username": "TwitterDev"
}
]
}
}
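Here is a minimal sketch of how those media URLs could be pulled out of includes with tweepy itself. It assumes the default tweepy.Response return type (rather than requests.Response); the bearer token and user id are placeholders:
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")  # placeholder token

response = client.get_liked_tweets(
    "USER_ID",  # placeholder user id
    tweet_fields=['created_at', 'text', 'attachments', 'author_id', 'entities'],
    media_fields=['preview_image_url', 'url'],
    user_fields=['username'],
    expansions=['attachments.media_keys', 'author_id']
)

# Map each media_key found in includes to its Media object.
media_by_key = {m.media_key: m for m in response.includes.get("media", [])}

for tweet in response.data or []:
    media_keys = (tweet.attachments or {}).get("media_keys", [])
    for key in media_keys:
        media = media_by_key.get(key)
        if media is not None:
            # Photos expose url; videos and GIFs usually only have preview_image_url.
            print(tweet.id, media.url or media.preview_image_url)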
I have this GeoJSON (all of this code is in Python):
{
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {
"stroke": "#000000",
"fill": "#005776",
"fill-opacity": 1.0
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[-81.26581704096897, 28.37974376331988],
[-81.26601725837781, 28.37977498699149],
[-81.26613780689904, 28.37940694447166],
[-81.26594365491499, 28.3793572200485],
[-81.26581704096897, 28.37974376331988]
]
]
}
}
]
}
I am encoding it like this:
geojson_string = json.dumps(geojson, separators=(',', ':'))
geojson_encoded = urllib.parse.quote(f"{{{geojson_string}}}")
And getting this string:
%7B%22type%22%3A%22FeatureCollection%22%2C%22features%22%3A%5B%7B%22type%22%3A%22Feature%22%2C%22properties%22%3A%7B%22stroke%22%3A%22%23000000%22%2C%22fill%22%3A%22%23005776%22%2C%22fill-opacity%22%3A1%7D%2C%22geometry%22%3A%7B%22type%22%3A%22Polygon%22%2C%22coordinates%22%3A%5B%5B%5B-81.26581704096897%2C28.37974376331988%5D%2C%5B-81.26601725837781%2C28.37977498699149%5D%2C%5B-81.26613780689904%2C28.37940694447166%5D%2C%5B-81.26594365491499%2C28.3793572200485%5D%2C%5B-81.26581704096897%2C28.37974376331988%5D%5D%5D%7D%7D%7D
Then I am making the url like this:
url = f"https://api.mapbox.com/styles/v1/{user}/{style}/static/geojson(geojson_encoded)/auto/640x360?{access_token}"
But I am getting this error:
message: "Failed parsing geojson"
Can someone help me figure out what I am doing wrong?
The problem appears to lie in how your Python code is encoding the GeoJSON and/or building the request against the API, so I would recommend you double-check that the request being made is what you expect it to be.
If I take your unencoded GeoJSON object and include it directly within a cURL request (making sure to percent-encode only the # characters used for the colors, which are otherwise not allowed in a URL), then I have no issues receiving a valid response:
https://api.mapbox.com/styles/v1/mapbox/light-v10/static/geojson({"type":"FeatureCollection","features":[{"type":"Feature","properties":{"stroke":"%23000000","fill":"%23005776","fill-opacity":1},"geometry":{"type": "Polygon","coordinates":[[[-81.26581704096897,28.37974376331988],[-81.26601725837781,28.37977498699149],[-81.26613780689904,28.37940694447166],[-81.26594365491499,28.3793572200485],[-81.26581704096897,28.37974376331988]]]}}]})/auto/630x360?access_token=my.token
The above request responds with your desired image (albeit without the custom user style).
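For reference, here is a minimal sketch (my own, with placeholder account values) of how the URL could be assembled in Python so that the resulting request matches the cURL call above. Note that the GeoJSON is quoted as-is, without wrapping it in extra braces, and that the encoded string and token are interpolated into the f-string:
import json
import urllib.parse

def static_map_url(geojson, user, style, access_token, size="640x360"):
    """Build a Mapbox Static Images URL with a GeoJSON overlay."""
    geojson_string = json.dumps(geojson, separators=(',', ':'))
    # safe="" also percent-encodes the "#" in the color values.
    geojson_encoded = urllib.parse.quote(geojson_string, safe="")
    return (
        f"https://api.mapbox.com/styles/v1/{user}/{style}/static/"
        f"geojson({geojson_encoded})/auto/{size}?access_token={access_token}"
    )

# Usage with the FeatureCollection from the question (placeholder values):
# url = static_map_url(geojson, "my-user", "my-style-id", "my.token")
# requests.get(url) should then return the rendered image.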
⚠️ disclaimer: I currently work for Mapbox ⚠️
I am playing with the Facebook Graph API. I am interested in getting results from page search.
I can look for places like this:
search?q=audi&type=place
and I get json with data I want, like website, location etc.
Example:
{
"data": [
{
"category": "Automotive Parts Store",
"category_list": [
{
"id": "139492049448901",
"name": "Automotive Parts Store"
}
],
"location": {
"city": "Belgrade",
"country": "Serbia",
"latitude": 44.8206,
"longitude": 20.4622,
"zip": "11000"
},
But when I want to search for pages and get public page data in JSON with a request like this:
search?q=audi&type=page
this is json I get:
{
"data": [
]
}
What's happening here? Why can't I do the search, and is there a way to achieve this?
I have been reading the Facebook API documentation for two days now and I can't find anything relevant.
Here is the Python code that I am using (token removed):
import facebook
token = "////////"
graph = facebook.GraphAPI(access_token=token, version="2.7")
events = graph.request("/search?type=page&q=cafe&fields=about,fan_count,website")
print(events)
Use the pages/search endpoint.
Here is the API endpoint; use it with a valid access token:
https://graph.facebook.com/v3.2/pages/search?q=m&fields=id,is_eligible_for_branded_content,is_unclaimed,link,location,name,verification_status
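As a minimal sketch, the endpoint can be called with the requests library (the access token is a placeholder and the fields mirror the list above):
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

resp = requests.get(
    "https://graph.facebook.com/v3.2/pages/search",
    params={
        "q": "audi",
        "fields": "id,name,link,location,verification_status",
        "access_token": ACCESS_TOKEN,
    },
)
resp.raise_for_status()
for page in resp.json().get("data", []):
    print(page["id"], page["name"], page.get("link"))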
I have followed the sample examples and created a workspace using IBM Watson Conversation.
I am using Python and have also followed the API documentation to send text input to Watson. However, there is no output, as opposed to the example.
# Watson Conversation service
import json
from watson_developer_cloud import ConversationV1 as Cv

conversation = Cv(username='XXXX', password='XXXX', version='2017-02-03')

# obtain workspace id
workspace_id = 'Your-ID'
context = {}
response = conversation.message(
    workspace_id=workspace_id,
    message_input={'text': 'hi'},
    context=context)
print(json.dumps(response, indent=2))
Here is the output of json.dumps():
{
"output":{
"text":["hello there, how can i help you?"
],
"nodes_visited":["node_1_1487754696521"
],
"log_messages":[]
},
"intents":[{
"intent":"hello",
"confidence":0.99
}
],
"entities":[],
"input":{
"text":"hi"
},
"context":{
"system":{
"_node_output_map":{
"node_1_1487754696521":[0
]
},
"dialog_turn_counter":1,
"dialog_stack":[{
"dialog_node":"root"
}
],
"dialog_request_counter":1
},
"conversation_id":"b2940af7-73c4-4ca8-81d6-363d18637e8e"
},
"alternate_intents":false
}
I have tried using the test bot in the workspace and it works. However, there is no output here in Python. Does anyone know what is wrong?
Okay, I suspect that there can only be one output. Following the sample, I had added a response at conversation_start that is supposed to trigger at the start of every conversation:
"output": { "text": [ "hello there, how can i help you?" ] }
I went to my workspace and deleted it. Now it works.
TL;DR: it only allows one output.
I want to upload an attachment while creating a JIRA ticket. I tried the format below, but it is not working; I get a syntax error for the file. Do you have any idea how to create this in a JSON file?
data = {
"fields": {
"project": {
"key": project
},
"attachments" : [
"file": "C:\data.txt"
],
"summary": task_summary,
"issuetype": {
"name": "Task"
}
}
}
You can only set issue fields during creation: https://docs.atlassian.com/jira/REST/latest/#d2e2786
You can add attachments to already existing issues by posting the content itself: https://docs.atlassian.com/jira/REST/latest/#d2e2035
So you have to do it in two steps, as sketched below.
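A minimal sketch of that two-step flow with the requests library, assuming basic auth and placeholder values for the base URL, credentials, project key and file path:
import requests

JIRA_BASE = "https://jira.example.com"  # placeholder
AUTH = ("username", "password")         # placeholder credentials

# Step 1: create the issue (fields only -- attachments cannot be set here).
issue = requests.post(
    f"{JIRA_BASE}/rest/api/2/issue",
    json={
        "fields": {
            "project": {"key": "DEV"},
            "summary": "task summary",
            "issuetype": {"name": "Task"},
        }
    },
    auth=AUTH,
).json()
issue_key = issue["key"]

# Step 2: post the file content to the issue's attachments resource.
with open(r"C:\data.txt", "rb") as fh:
    requests.post(
        f"{JIRA_BASE}/rest/api/2/issue/{issue_key}/attachments",
        files={"file": fh},
        headers={"X-Atlassian-Token": "no-check"},  # required by this endpoint
        auth=AUTH,
    )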
It works now; I was not using rest/api/2/issue/DEV-290/attachments.