Consider a Twitter user who has 1 million followers. I want to collect a random page of followers of this user. Is there any way to do this? I don't want to get the list of all followers because it would exhaust my tokens.
I'm looking for something like this:
follower_ids = api.followers_ids(user_id, page=page_index)
where page_index is a random page.
Thanks.
From the documentation, you can get the number of followers for a user, e.g. twitterdev:
https://api.twitter.com/1.1/users/show.json?screen_name=twitterdev
This returns, amongst other things, followers_count:
{
...
"followers_count": 143916,
...
}
From there you can work out which pages the random followers are on.
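Picking a random page from that count can be sketched like this (assuming pages hold 5,000 ids each, which is the default page size of the v1.1 followers/ids endpoint; the page-size value is an assumption, not something the question confirms):

```python
import random

def random_page_index(followers_count, page_size=5000):
    """Pick a uniformly random page index, given the total follower count.

    Assumes pages are numbered from 1 and hold `page_size` ids each.
    """
    num_pages = -(-followers_count // page_size)  # ceiling division
    return random.randint(1, num_pages)

# Using the followers_count from the response above:
page = random_page_index(143916)  # a random page in 1..29
```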
Just to be clear, I've never used twitter, or the API, and have just searched for you.
You can select, for example, 20 ids from all the ids like this:
SCREEN_NAME = "the_user_name"
followersIds = api.followers_ids(SCREEN_NAME)
print(followersIds[:20])
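If a random selection is wanted rather than the first 20, random.sample can draw it. A minimal sketch over a stand-in id list (the ids here are made up; in real code the list would come from api.followers_ids as above):

```python
import random

# Stand-in for the list returned by api.followers_ids(SCREEN_NAME)
follower_ids = [101, 102, 103, 104, 105, 106, 107]

# Draw up to 20 distinct ids at random
random_twenty = random.sample(follower_ids, min(20, len(follower_ids)))
print(random_twenty)
```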
I am having trouble obtaining friends_count and favorites_count using the search_all_tweets Tweepy V2 API call.
GeeksForGeeks lists friends_count and favorites_count as attributes (https://www.geeksforgeeks.org/python-user-object-in-tweepy/). Unfortunately, the last two lines of the code below raise an AttributeError.
user.public_metrics only consists of followers_count, following_count, tweet_count, and listed_count.
user.entities contains only extraneous URL data.
Code is shown below:
client = tweepy.Client(bearer_token=config.BEARER_TOKEN,
                       consumer_key=config.CONSUMER_KEY,
                       consumer_secret=config.CONSUMER_SECRET,
                       access_token=config.ACCESS_TOKEN,
                       access_token_secret=config.ACCESS_TOKEN_SECRET)

for response in tweepy.Paginator(client.search_all_tweets, query=s,
                                 tweet_fields=['context_annotations', 'created_at', 'public_metrics', 'author_id', 'lang', 'geo', 'entities'],
                                 user_fields=['username', 'entities', 'public_metrics', 'location', 'verified', 'description'],
                                 max_results=100, expansions='author_id'):
    for user in response.includes["users"]:
        print(user.public_metrics)
        print(user.entities)
        print(user.friends_count)
        print(user.favorites_count)
The fields listed by GeeksForGeeks are the fields of the User object in the Twitter V1 API.
There is unfortunately no way to get the number of likes of a User with the Twitter V2 API. You can try to get all their likes and count the total number of returned tweets, but that will work only if the User has few likes (and it will consume your monthly Tweet cap).
And friends was the previous name for followings, so the equivalent of friends_count in the Twitter V2 API is following_count. If you are looking for mutuals, you have to get the full list of followers and the full list of followings of the user and count the number of common elements.
Finally, I would advise you to use the Twitter API documentation (here for User objects).
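The mutuals computation described above amounts to a set intersection. A minimal sketch over stand-in id sets (the ids are made up; real code would first page through the full follower and following id lists):

```python
# Stand-ins for the full follower / following id lists of the user
follower_ids = {11, 22, 33, 44}
following_ids = {22, 44, 55}

# Mutuals are the ids present in both sets
mutual_ids = follower_ids & following_ids
mutual_count = len(mutual_ids)
print(mutual_count)  # 2
```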
Suppose I have an example Twitter account whose username is #testaccount, and I want to grab the followers of #testaccount, and with that also grab the followers of the users who follow #testaccount.
My problem is I don't know how to store this and map it into a database or into a dict/list. If that is possible, how can I do it? I have to work with all the data, including the followers of followers.
Just as an example, say I have some data that looks like:
I want to store all this data in a way that lets me access each follower's own followers.
I would make a custom class, TwitterAccount:
class TwitterAccount:
    def __init__(self, name):
        self.name = name
        self.followers = []

    def addFollowers(self, followers):
        self.followers.extend(followers)

    # Will return a list of the followers.
    def getFollowers(self):
        return self.followers

    # more functionality can be added
And you can use it like this:
acc0 = TwitterAccount("BradPitt")
acc1 = TwitterAccount("BradPittFan1")
acc2 = TwitterAccount("BradPittFan2")
acc3 = TwitterAccount("BradPittFan1Mother")
acc0.addFollowers([acc1,acc2])
acc1.addFollowers([acc3])
# more code
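Alternatively, the same mapping can be kept in a plain dict keyed by account name, which also serializes directly to JSON for database storage. A minimal sketch (the names and helper function are made up for illustration):

```python
# Map each account name to a list of follower names
follower_map = {}

def add_followers(name, follower_names):
    follower_map.setdefault(name, []).extend(follower_names)
    # Ensure every follower also gets an entry, so followers of
    # followers can be attached the same way later.
    for follower in follower_names:
        follower_map.setdefault(follower, [])

add_followers("BradPitt", ["BradPittFan1", "BradPittFan2"])
add_followers("BradPittFan1", ["BradPittFan1Mother"])
print(follower_map["BradPitt"])      # ['BradPittFan1', 'BradPittFan2']
print(follower_map["BradPittFan1"])  # ['BradPittFan1Mother']
```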
More on classes:
A nice tutorial and intro
Official docs
Miguel Grinberg has a very good tutorial on flask, where he also goes into detail on the follower/followed structure using SQL databases: https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-viii-followers
His solution is to construct two tables: a user table, which contains the user ID, posts, etc., and an auxiliary table, which maps user IDs (followed) to other user IDs (followers):
Image from https://blog.miguelgrinberg.com/static/images/mega-tutorial/ch08-followers-schema.png
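That followed/followers mapping can be sketched with Python's built-in SQLite bindings. The table and column names below mirror the tutorial's schema diagram, but the sample users and queries are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, username TEXT)")
# Auxiliary table: each row says follower_id follows followed_id
conn.execute("""
    CREATE TABLE followers (
        follower_id INTEGER REFERENCES user(id),
        followed_id INTEGER REFERENCES user(id),
        PRIMARY KEY (follower_id, followed_id)
    )
""")
conn.executemany("INSERT INTO user VALUES (?, ?)",
                 [(1, "testaccount"), (2, "fan_one"), (3, "fan_two")])
# Both fans follow testaccount; fan_two also follows fan_one
conn.executemany("INSERT INTO followers VALUES (?, ?)",
                 [(2, 1), (3, 1), (3, 2)])

# Who follows testaccount (id 1)?
rows = conn.execute("""
    SELECT u.username FROM user u
    JOIN followers f ON u.id = f.follower_id
    WHERE f.followed_id = 1
    ORDER BY u.username
""").fetchall()
print([r[0] for r in rows])  # ['fan_one', 'fan_two']
```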
I want to count how many times a friend has liked a user's post using Python.
I have successfully fetched the names of friends who have liked the posts. But some posts don't have any likes at all, and the code raises an error there. I'm using facebook-sdk 1.0.0. Any help would be much appreciated.
Code snippet is:
import facebook

graph = facebook.GraphAPI(access_token="")
posts = graph.get_connections(id="me", connection_name="posts")
for w in range(0, 5):
    p1 = posts['data'][w]['likes']['data']
    for i in range(0, len(p1)):
        r = p1[i]['name']
        print(r)
Also, is there any way to find out which friends hit the maximum number of likes across all posts?
I am a little unsure exactly what you are missing here or what your question is, but it looks like all you need to do is define the name you want to look for and count how many times it occurs.
Something like this:
import facebook

likes = 0  # How many times they liked your posts
friend = "Bob"  # Your friend's name
graph = facebook.GraphAPI(access_token="")
posts = graph.get_connections(id="me", connection_name="posts")
for w in range(0, 5):
    # .get() avoids a KeyError on posts with no likes
    p1 = posts['data'][w].get('likes', {}).get('data', [])
    for i in range(0, len(p1)):
        if friend == p1[i]['name']:
            likes += 1
print(likes)
Also, you could clean up your code considerably if you ditched the ranges for your for loops and instead iterated over the data structures themselves. Something like this would work:
import facebook

likes = 0  # How many times they liked your posts
friend = "Bob"  # Your friend's name
graph = facebook.GraphAPI(access_token="")
posts = graph.get_connections(id="me", connection_name="posts")
for user_post in posts['data']:
    for name in user_post.get('likes', {}).get('data', []):
        if friend == name['name']:
            likes += 1
print(likes)
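As for the follow-up question about which friends like the most posts, collections.Counter can tally names across all posts. A sketch over made-up sample data shaped like posts['data']:

```python
from collections import Counter

# Made-up stand-in for posts['data']
posts_data = [
    {"likes": {"data": [{"name": "Bob"}, {"name": "Alice"}]}},
    {"likes": {"data": []}},                       # a post with no likes
    {"likes": {"data": [{"name": "Bob"}]}},
]

# Tally each liker's name across every post
like_counts = Counter(
    liker["name"]
    for post in posts_data
    for liker in post.get("likes", {}).get("data", [])
)
print(like_counts.most_common(1))  # [('Bob', 2)]
```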
I'm a noob when it comes to Python. I literally just started today and have little understanding of programming. I have managed to make the following code work:
from twitter import *

config = {}
execfile("config.py", config)
twitter = Twitter(
    auth = OAuth(config["access_key"], config["access_secret"],
                 config["consumer_key"], config["consumer_secret"]))
user = "skiftetse"
results = twitter.statuses.user_timeline(screen_name = user)
for status in results:
    print "(%s) %s" % (status["created_at"], status["text"].encode("ascii", "ignore"))
The problem is that it's only printing 20 results. The Twitter page I'd like to get data from has 22k posts, so something is wrong with the last line of code.
I would really appreciate help with this! I'm doing this for research on sentiment analysis, so I need several hundred posts to analyze. Beyond that, it would be great if retweets and information about how many people retweeted the posts were included. I need to get better at all this, but right now I just need to meet that deadline at the end of the month.
You need to understand how the Twitter API works. Specifically, the user_timeline documentation.
By default, a request will only return 20 Tweets. If you want more, you will need to set the count parameter to, say, 50.
e.g.
results = twitter.statuses.user_timeline(screen_name = user, count = 50)
Note, count:
Specifies the number of tweets to try and retrieve, up to a maximum of 200 per distinct request.
In addition, the API will only let you retrieve the most recent 3,200 Tweets.
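To collect more than one page up to that 3,200 limit, requests are typically chained with the max_id parameter (one less than the oldest id already seen). Here is a sketch with a stub fetch function standing in for twitter.statuses.user_timeline; the stub and its data are made up for illustration:

```python
def fetch_all(fetch_page, count=200, limit=3200):
    """Collect up to `limit` tweets by paging with max_id.

    `fetch_page(count, max_id)` stands in for a call like
    twitter.statuses.user_timeline(screen_name=user, count=count, max_id=max_id);
    each returned tweet is a dict with at least an "id" key.
    """
    tweets = []
    max_id = None
    while len(tweets) < limit:
        page = fetch_page(count, max_id)
        if not page:
            break
        tweets.extend(page)
        # Step below the oldest tweet seen, so it isn't fetched twice
        max_id = min(t["id"] for t in page) - 1
    return tweets[:limit]

# Stub timeline: 450 fake tweets with descending ids
timeline = [{"id": i} for i in range(450, 0, -1)]

def fake_fetch(count, max_id):
    eligible = [t for t in timeline if max_id is None or t["id"] <= max_id]
    return eligible[:count]

all_tweets = fetch_all(fake_fetch, count=200)
print(len(all_tweets))  # 450
```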
Using an access token from the Facebook Graph API Explorer (https://developers.facebook.com/tools/explorer), with access scope which includes user likes, I am using the following code to try to get all the likes of a user profile:
myfbgraph = facebook.GraphAPI(token)
mylikes = myfbgraph.get_connections(id="me", connection_name="likes")['data']
for like in mylikes:
    print like['name'], like['category']
    ...
However this is always giving me only 25 likes, whereas I know that the profile I'm using has 42 likes. Is there some innate limit operating here, or what's the problem in getting ALL the page likes of a user profile?
Per the Graph documentation:
When you make an API request to a node or edge, you will usually not
receive all of the results of that request in a single response. This
is because some responses could contain thousands and thousands of
objects, and so most responses are paginated by default.
https://developers.facebook.com/docs/graph-api/using-graph-api/v2.2#paging
Well, this appears to work (a method, which accepts a user's facebook graph):
import requests

def get_myfacebook_likes(myfacebook_graph):
    myfacebook_likes = []
    myfacebook_likes_info = myfacebook_graph.get_connections("me", "likes")
    while myfacebook_likes_info['data']:
        for like in myfacebook_likes_info['data']:
            myfacebook_likes.append(like)
        if 'next' in myfacebook_likes_info['paging']:
            myfacebook_likes_info = requests.get(myfacebook_likes_info['paging']['next']).json()
        else:
            break
    return myfacebook_likes
The above answers will work, but run pretty slowly for anything with many likes. If you just want the number of likes, you can get it much more efficiently with the summary's total_count:
myfacebook_likes_info = graph.get_connections(post['id'], 'likes?summary=1')
print(myfacebook_likes_info["summary"]["total_count"])