Problem Statement:
I was trying to publish a post, say, from ABC page to XYZ page and ended up with this error:
(#200) Posts where the actor is a page cannot also include a target_id other than EVENT or GROUP
Note: I have an access_token for the user's page (in this case: ABC).
What I've tried so far:
I'm using the facebook-sdk package for Python to make the API calls.
import facebook

# access_token of the page on whose behalf the post will be published
graph = facebook.GraphAPI(access_token)
response = graph.put_wall_post(
    message=facebook_post_data,
    profile_id=profile_id,  # the page on which the post will be published
)
which is equivalent to
{profile-id}/feed?access_token=access_token&message=Hello&method=post
manage_pages, publish_pages, and publish_actions are already among the granted scopes!
Question:
Is there no way to solve this issue, or am I missing something?
It is possible to publish a post to one page from another page using the Facebook UI, yet the Graph API documentation contains no explicit statement that the same thing is impossible through the API.
Thanks for your time!
I'm trying to write some code that downloads content from a Moodle website.
The first step was logging in, but from what I've tried so far, it seems I'm not actually being redirected to the post-login page (the one with the courses data etc.). Here's what I've tried:
import requests

user = 'my_username'
pas = 'my_password'
payload = {'username': user, 'password': pas}
login_site = "https://moodle2.cs.huji.ac.il/nu20/login/index.php?"  # actual login webpage
data_site = "https://moodle2.cs.huji.ac.il/nu20"  # should be the inner webpage with the courses etc.

with requests.Session() as session:
    post = session.post(login_site, data=payload)
    r = session.get(data_site)
    content = r.text
    print(content)  # doesn't actually contain the HTML of the main courses page (it looks like the login page)
Any idea why that might happen? Would appreciate your help ;)
It is difficult to help without knowing more about the specific site you are trying to log into.
One thing that's worth a try is changing
session.post(login_site, data=payload)
to
session.post(login_site, json=payload)
When the data parameter is used, the content-type header is not set to "application/json". Some sites will reject the POST based on this.
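To illustrate the difference (the URL is a placeholder), the two calls send different bodies and Content-Type headers:
import requests

payload = {'username': 'my_username', 'password': 'my_password'}

# data= sends a form-encoded body with Content-Type: application/x-www-form-urlencoded
requests.post("https://example.com/login", data=payload)

# json= sends a JSON body with Content-Type: application/json
requests.post("https://example.com/login", json=payload)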
I've also run into sites which have protections against logins from scripts. They may require an additional token to be sent in the POST.
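For example, here is a sketch of scraping such a hidden token before posting the credentials (recent Moodle versions embed one in the login form named logintoken; that field name is an assumption about this particular site):
import re
import requests

login_site = "https://moodle2.cs.huji.ac.il/nu20/login/index.php"

with requests.Session() as session:
    # fetch the login page first so the session picks up its cookies
    login_page = session.get(login_site)
    # pull the hidden token out of the form, if present
    match = re.search(r'name="logintoken" value="([^"]+)"', login_page.text)
    payload = {'username': 'my_username', 'password': 'my_password'}
    if match:
        payload['logintoken'] = match.group(1)
    post = session.post(login_site, data=payload)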
If all else fails, you could consider using Selenium. Selenium lets you control a browser instance programmatically: you can simply load the page and send text input to the username and password fields on the login form. This also gets you access to any content that is rendered client-side via JavaScript. However, it may be overkill depending on your use case.
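A minimal Selenium sketch, assuming a standard Moodle login page (the element ids are assumptions):
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("https://moodle2.cs.huji.ac.il/nu20/login/index.php")
driver.find_element(By.ID, "username").send_keys("my_username")
driver.find_element(By.ID, "password").send_keys("my_password")
driver.find_element(By.ID, "loginbtn").click()
html = driver.page_source  # the fully rendered page, including JS-generated content
driver.quit()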
I am trying to create a program that will allow workers at a company to automatically add information to a digital noticeboard which is connected to a Raspberry Pi. They'll submit information on an online form and then a python-pptx enabled program will turn it into nicely designed PowerPoint slides.
I've managed to get a script that can enter the login information for my Microsoft Forms account and print the session using:
import requests

print('starting')

# This URL will be the URL that your login form points to with the "action" tag.
POST_LOGIN_URL = ''  # insert the URL for the Microsoft Forms login page with username

# This URL is the page you actually want to pull down with requests.
REQUEST_URL = ''  # insert the URL you want within Microsoft Forms (the responses page)

payload = {
    'passwd': 'my_password'  # insert your password ('passwd' is the Microsoft Forms variable name)
}

with requests.Session() as session:
    post = session.post(POST_LOGIN_URL, data=payload)
    r = session.get(REQUEST_URL)
    print(type(r))
    print(r.text)
The types of r and r.text are:
print(type(r))
<class 'requests.models.Response'>
print(type(r.text))
<class 'str'>
Where REQUEST_URL is the URL of the results for the form (page looks like: Microsoft Forms results page). I then want to be able to automatically scrape the information from all the results. This is displayed on a page like this: results printed on Microsoft Forms page.
My issue is then extracting the information from that URL. When I print r.text, I get content from the page, but it seems to be mostly HTML markup and hashed identifiers (I could include the output of print(r.text), but it is several pages long and more confusing than anything).
I'm trying to find a way to reliably copy specific data from the Microsoft Forms webpage, but currently don't know of a function that can do that. Does anyone have experience with the Python requests library? Or has anyone tried anything like this before?
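For example, I've been experimenting with BeautifulSoup to pull pieces out of r.text, along these lines (the tag and class names are guesses, since I don't know the page's real structure):
from bs4 import BeautifulSoup

soup = BeautifulSoup(r.text, "html.parser")
# hypothetical selector: collect the text of every element that looks like a form response
for cell in soup.find_all("div", class_="response-text"):
    print(cell.get_text(strip=True))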
Thanks,
Luke
I used Feedburner to automatically tweet new posts on my blog, but Feedburner stopped working and I want to write my own code to do the same.
The Feedburner tweets looked like this: the post title followed by a short excerpt and a clickable image pointing to the blog post. The URL for the blog post is not included in the body of the tweet.
Can I do the same with the Twitter API, preferably using Python?
I've looked at the docs of both python-twitter PostUpdate() and Twitter API, but I could not achieve the same result. At most I could publish a tweet with the image, but without a link to the blog post.
# api_twitter = twitter.Api( ...
msg = "My tweet message body."
img = "https://[...].jpg"
status = api_twitter.PostUpdate(status=msg, media=img)
I've found the problem - I should NOT use the media parameter at all in the PostUpdate() call, and just let Twitter itself fetch all Card data from the URL I provided at the end of status. The Card will be displayed, but not the URL.
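A sketch of the working version, assuming an authenticated python-twitter client (the blog URL is a placeholder):
import twitter

api_twitter = twitter.Api(
    consumer_key='...', consumer_secret='...',
    access_token_key='...', access_token_secret='...',
)
msg = "My tweet message body."
post_url = "https://myblog.example/my-post"  # placeholder blog post URL
# no media= argument: Twitter builds the Card (title, excerpt, image) from the
# page's meta tags and hides the trailing URL in the rendered tweet
status = api_twitter.PostUpdate(status=msg + " " + post_url)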
I'm trying to exchange a Facebook Graph API token for a long-lived one, according to the explanation given in this link.
Here is what I do:
url = "https://graph.facebook.com/oauth/access_token?grant_type=fb_exchange_token&client_id={"+app_id+"}&client_secret={"+secret+"}&fb_exchange_token={"+old_token+"}"
resp = urllib.urlopen(url).read()
print resp
Here is the result:
{"error":{"message":"Invalid Client ID","type":"OAuthException","code":101,"fbtrace_id":"Bjvz2LDzJhs"}}
I'm using Python; I know there is no official SDK, but I just need to crawl some posts and save the related data.
I don't understand what is happening. Why would the client ID be invalid?
PS: the old_token works
thanks in advance!
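One thing worth checking: in the Graph API docs the curly braces around {app-id} and the other values are placeholders, not literal characters, so building the URL with "{" and "}" around the values would itself make the client ID invalid. A sketch of the same call without them:
import urllib

url = ("https://graph.facebook.com/oauth/access_token"
       "?grant_type=fb_exchange_token"
       "&client_id=" + app_id +
       "&client_secret=" + secret +
       "&fb_exchange_token=" + old_token)
resp = urllib.urlopen(url).read()
print resp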
Here is a piece of code that I use to fetch a web page's HTML source by its URL using Google App Engine:
from google.appengine.api import urlfetch

url = "http://www.google.com/"
result = urlfetch.fetch(url)
if result.status_code == 200:
    print "content-type: text/plain"
    print
    print result.content
Everything is fine here, but sometimes I need to get the HTML source of a page from a site where I am registered, and I can only access that page after passing my ID and password. (It can be any site, actually; for instance, any mail-account provider like Yahoo: https://login.yahoo.com/config/mail?.src=ym&.intl=us, or any other site where users get free accounts by first registering.)
Can I somehow do it in Python (through Google App Engine)?
You can check for an HTTP status code of 401, "authorization required", and provide the kind of HTTP authorization (basic, digest, whatever) that the site is asking for -- see e.g. here for more details (there's not much that's GAE specific here -- it's a matter of learning HTTP details and obeying them!-).
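For example, a sketch of sending HTTP Basic authorization with urlfetch (the URL and credentials are placeholders; Python 2, matching the snippet above):
import base64
from google.appengine.api import urlfetch

url = "http://www.example.com/protected/page"  # placeholder protected URL
credentials = base64.b64encode("my_id:my_password")
result = urlfetch.fetch(url, headers={"Authorization": "Basic " + credentials})
if result.status_code == 200:
    print result.content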
As Alex said, you can check the status code and see what type of authorization the site wants, but you cannot generalize this: some sites will not give any hint, or will only allow login through a non-standard form. In those cases you may have to automate the login process using forms. For that you can use a library like twill (http://twill.idyll.org/), or code a specific form submit for each site.
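For the form-submit route, a sketch with urlfetch (the URL and field names are placeholders for whatever the site's login form actually uses):
import urllib
from google.appengine.api import urlfetch

login_url = "http://www.example.com/login"  # placeholder: the form's "action" URL
form_fields = {"username": "my_id", "password": "my_password"}
result = urlfetch.fetch(
    url=login_url,
    payload=urllib.urlencode(form_fields),
    method=urlfetch.POST,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
# any session cookie comes back in result.headers.get("set-cookie")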