It used to be that you could get a True/False from exists_friendship with Tweepy through the Twitter API, but you now have to use lookup_friendships due to changes last year (which I believe has a rate limit of 15 calls per 15 minutes?). How would I amend this Python code (which used to work) to reflect that and make use of the new function?
It's for a local community/neighbourhood app; I just want it to check whether someone is following the authenticated user or not.
user_id is the person you want to check is following you (already defined earlier in the script).
00000000000 is your own user_id (the app user's id), but this Tweepy link seems to suggest you just use 'self'?
if api.exists_friendship(user_id, 00000000000):
    print 'YAY! Friend!'
else:
    print 'Boooo!'
N.B. Tweepy is here
N.B. Needless to say, this does not work:
if api.lookup_friendships(user_id, 00000000000):
So, it turns out that the right request with Tweepy is as follows:
user_id = '00000000000'
print api.show_friendship(target_id=user_id)
That returns JSON describing the relationship between the authenticated user and the other user, within which 'followed_by' carries a True/False value.
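If you only need the boolean, you can wrap that call in a small helper. This is a sketch, assuming a Tweepy version where show_friendship returns a (source, target) pair of Friendship objects; the follows_me name is my own:

```python
# Sketch: reduce the show_friendship result to a single True/False.
# In recent Tweepy versions, api.show_friendship() returns a pair of
# Friendship objects (source, target); source.followed_by reports
# whether the target user follows the authenticated user.
def follows_me(api, target_id):
    source, target = api.show_friendship(target_id=target_id)
    return source.followed_by
```

Then `if follows_me(api, user_id):` replaces the old exists_friendship check in that direction.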
I want to store a number of codes in my database so that when a user wants to register, my application requires them to enter one of the codes that are in the database before the system grants registration access.
All this is in Django, but honestly I have no idea how to do it; if you can help me, I would appreciate it very much.
Thanks for your time.
There are a couple of steps that you will need to take, but essentially you are just seeding the DB with valid registration codes. To generate a number of registration codes you can use the following; I limit the number to 1000, but this can be changed.
import uuid

codes = set()
num_codes = 1000
while num_codes > 0:
    code = str(uuid.uuid4())
    # make sure the randomly generated code isn't a duplicate
    while code in codes:
        code = str(uuid.uuid4())
    # add the newly generated code to the set and decrement the counter
    codes.add(code)
    num_codes -= 1
Next, you will want to add the codes to a new table. It would probably make sense to have a table structure like:
{
    table_name: registration code,
    columns: {
        id: int,
        code: string,
        valid: bool,
    }
}
where valid limits the code to a one-time registration.
Then when a user tries to register, select rows where the key the user entered equals the code column and valid = true; if the query returns a row, the code is valid, otherwise the key is invalid or already used.
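The one-time-use check boils down to a compare-and-invalidate step. The snippet below models the table as a plain dict mapping code to its valid flag, just to show the logic; in Django you would express the same thing with a filtered queryset (filter on the code and valid=True, then flip valid to False). The redeem_code name is my own:

```python
def redeem_code(codes_table, submitted):
    """Return True and invalidate the code if it is present and unused.

    codes_table models the registration-code table as {code: valid}.
    """
    if codes_table.get(submitted):
        codes_table[submitted] = False  # one-time use: burn the code
        return True
    return False
```

Doing the check and the invalidation in one step (in Django, an `update()` on the filtered queryset) avoids two users racing to register with the same code.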
Using an access token from the Facebook Graph API Explorer (https://developers.facebook.com/tools/explorer), with an access scope that includes user likes, I am using the following code to try to get all the likes of a user profile:
myfbgraph = facebook.GraphAPI(token)
mylikes = myfbgraph.get_connections(id="me", connection_name="likes")['data']
for like in mylikes:
    print like['name'], like['category']
...
However, this always gives me only 25 likes, whereas I know that the profile I'm using has 42 likes. Is there some built-in limit operating here, or what's the problem in getting ALL the likes of a user profile?
Per the Graph documentation:
When you make an API request to a node or edge, you will usually not receive all of the results of that request in a single response. This is because some responses could contain thousands and thousands of objects, and so most responses are paginated by default.
https://developers.facebook.com/docs/graph-api/using-graph-api/v2.2#paging
Well, this appears to work (a method which accepts a user's Facebook graph):
import requests

def get_myfacebook_likes(myfacebook_graph):
    myfacebook_likes = []
    myfacebook_likes_info = myfacebook_graph.get_connections("me", "likes")
    while myfacebook_likes_info['data']:
        for like in myfacebook_likes_info['data']:
            myfacebook_likes.append(like)
        # follow the 'next' paging link until the edge is exhausted
        if 'next' in myfacebook_likes_info['paging']:
            myfacebook_likes_info = requests.get(myfacebook_likes_info['paging']['next']).json()
        else:
            break
    return myfacebook_likes
The above answers will work, but pretty slowly for anything with many likes. If you just want the number of likes, you can get it much more efficiently by asking for a summary with a total count:
myfacebook_likes_info = graph.get_connections(post['id'], 'likes?summary=1')
print myfacebook_likes_info["summary"]["total_count"]
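Wrapped in a helper, that count lookup is easy to reuse. A sketch; get_like_count is my own name, and the graph object is assumed to expose get_connections as in the snippets above:

```python
def get_like_count(graph, object_id):
    # Ask the Graph API for a summary instead of paging every like;
    # the summary carries total_count for the whole edge.
    info = graph.get_connections(object_id, 'likes?summary=1')
    return info['summary']['total_count']
```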
I have a list of a few thousand twitter ids and I would like to check who follows who in this network.
I used Tweepy to get the accounts using something like:
ids = {}
for i in list_of_accounts:
    for page in tweepy.Cursor(api.followers_ids, screen_name=i).pages():
        ids[i] = page
        time.sleep(60)
The values in the dictionary ids form the network I would like to analyze. If I try to get the complete list of followers for each id (to compare to the list of users in the network) I run into two problems.
The first is that I may not have permission to see the user's followers - that's okay and I can skip those - but they stop my program. This is the case with the following code:
connections = {}
for x in user_ids:
    l = []
    for page in tweepy.Cursor(api.followers_ids, user_id=x).pages():
        l.append(page)
    connections[x] = l
The second is that I have no way of telling when my program will need to sleep to avoid the rate-limit. If I put a 60 second wait after every page in this query - my program would take too long to run.
I tried to find a simple 'exists_friendship' command that might get around these issues in a simpler way - but I only find things that became obsolete with the change to API 1.1. I am open to using other packages for Python. Thanks.
if api.exists_friendship(userid_a, userid_b):
    print "a follows b"
else:
    print "a doesn't follow b, check separately if b follows a"
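For the two problems above, one approach (a sketch, not the only way): let Tweepy sleep through rate limits by constructing the API object with wait_on_rate_limit=True, and skip unreadable accounts instead of crashing. The helper below keeps the fetch logic generic so it can be shown self-contained; collect_connections is my own name, get_pages would wrap a tweepy.Cursor(...).pages() call, and the errors tuple would be (tweepy.TweepError,) in older Tweepy versions:

```python
def collect_connections(get_pages, user_ids, errors=(Exception,)):
    """Collect follower-ID pages per user, skipping users we can't read.

    get_pages(user_id) should yield pages of follower IDs (e.g. wrapping
    tweepy.Cursor(api.followers_ids, user_id=user_id).pages()); errors is
    the exception tuple to swallow, e.g. the error Tweepy raises for a
    protected account.
    """
    connections = {}
    for uid in user_ids:
        try:
            connections[uid] = [page for page in get_pages(uid)]
        except errors:
            continue  # no permission to see this user's followers: skip
    return connections
```

Combined with wait_on_rate_limit=True on the API object, Tweepy itself handles the sleeping, so no fixed 60-second wait per page is needed.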
By friends I mean all of the twitter users who I am following.
Is it possible using tweepy with python 2.7.6 to display a full list of all friends?
I have found it possible to display a list which contains some of my friends with the following code. After handling authorization of course.
api = tweepy.API(auth)
user = api.get_user('MyTwitterHandle')
print "My Twitter Handle:", user.screen_name
ct = 0
for friend in user.friends():
    print friend.screen_name
    ct = ct + 1
print "\n\nFinal Count:", ct
This code successfully prints what appear to be my 20 most recent friends on Twitter, and the ct variable ends up equal to 20. This method excludes the rest of the users I am following on Twitter.
Is it possible to display all of the users I am following on twitter? Or at least a way to adjust a parameter to allow me to include more friends?
According to the source code, friends() maps to the GET friends/list Twitter endpoint, which allows a count parameter to be passed in:
The number of users to return per page, up to a maximum of 200. Defaults to 20.
This would allow you to get up to 200 friends via friends().
A better approach, though, would be to use a Cursor, which pages through all of the friends:
for friend in tweepy.Cursor(api.friends).items():
    # process the friend here
    process_friend(friend)
See also:
incomplete friends list
Tweepy Cursor vs Iterative for low API calls
I'm trying to use the Python-Twitter library (https://github.com/bear/python-twitter) to extract mentions of a Twitter account using the GetMentions() function. The script populates a database and runs periodically on a cron job, so I don't want to extract every mention, only those since the last time the script was run.
The code below extracts the mentions fine, but for some reason the since_id argument doesn't seem to do anything: the function returns all the mentions every time I run it, rather than filtering for only the most recent ones. For reference, the documentation is here: https://python-twitter.googlecode.com/hg/doc/twitter.html#Api-GetMentions
What is the correct way to use the GetMentions() function? (I've looked but I can't find any examples online.) Alternatively, is there a different/more elegant way of extracting Twitter mentions that I'm overlooking?
def scan_timeline():
    ''' Scans the timeline and populates the database with the results '''
    FN_NAME = "scan_timeline"
    # Establish the api connection
    api = twitter.Api(
        consumer_key = "consumerkey",
        consumer_secret = "consumersecret",
        access_token_key = "accesskey",
        access_token_secret = "accesssecret"
    )
    # Tweet ID of most recent mention from the last time the function was run
    # (In actual code this is dynamic and extracted from a database)
    since_id = 498404931028938752
    # Retrieve all mentions created since the last scan of the timeline
    length_of_response = 20
    page_number = 0
    while length_of_response == 20:
        # Retrieve most recent mentions
        results = api.GetMentions(since_id, None, page_number)
        ### Additional code inserts the tweets into a database ###
Your syntax seems to be consistent with what the Python-Twitter library documents. What I think is happening is the following:
If the limit of Tweets has occurred since the since_id, the since_id will be forced to the oldest ID available.
That would lead to all the tweets starting from the oldest available ID being returned. Try working with a more recent since_id value, and also check whether the since_id you're passing is appropriate.
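One way to make the incremental fetch robust is to track the highest tweet ID you have seen and pass it back as since_id on the next cron run. A sketch of that bookkeeping; fetch_new_mentions is my own name, get_mentions stands in for the api.GetMentions call, and each returned status is assumed to carry an .id attribute:

```python
def fetch_new_mentions(get_mentions, since_id):
    """Fetch mentions newer than since_id; return (mentions, new_since_id).

    get_mentions(since_id) stands in for api.GetMentions(since_id=...).
    """
    mentions = get_mentions(since_id)
    new_since_id = since_id
    for status in mentions:
        if status.id > new_since_id:
            new_since_id = status.id  # remember the newest mention seen
    return mentions, new_since_id
```

Persist new_since_id in the database at the end of each run, so the next run asks only for genuinely new mentions instead of re-fetching everything.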