I'm trying to find a way to avoid getting the same tweets twice when using the search API.
This is what I'm doing:
1. Make a request to Twitter
2. Store the tweets
3. Make another request to Twitter
4. Store the tweets
5. Compare the results from steps 2 and 4
Ideally, in step 5 I would get 0, meaning that no overlapping tweets were received and I'm not asking the Twitter server for the same information more than once.
But I think I'm stuck at step 3, where I have to make the second call. I'm trying to use the 'since_id' argument to get tweets after a certain point, but I'm not sure the value I'm using is correct.
Code:
import twitter

class Test():
    def __init__(self):
        self.t_auth()
        self.hashtag = ['justinbieber']
        self.tweets_1 = []
        self.ids_1 = []
        self.created_at_1 = []
        self.tweet_text_1 = []
        self.last_id_1 = ''
        self.page_1 = 1
        self.tweets_2 = []
        self.ids_2 = []
        self.created_at_2 = []
        self.tweet_text_2 = []
        self.last_id_2 = ''
        self.page_2 = 1

        # First pass: fetch 15 pages of search results
        for i in range(1, 16):
            self.tweets_1.extend(self.api.GetSearch(self.hashtag, per_page=100, since_id=self.last_id_1, page=self.page_1))
            self.page_1 += 1
            print len(self.tweets_1)

        for t in self.tweets_1:
            self.ids_1.insert(0, t.id)
            self.created_at_1.insert(0, t.created_at)
            self.tweet_text_1.insert(0, t.text)
            self.last_id_1 = t.id

        self.last_id_2 = self.last_id_1

        # Second pass: fetch again, starting from the last ID of the first pass
        for i in range(1, 16):
            self.tweets_2.extend(self.api.GetSearch(self.hashtag, per_page=100, since_id=self.last_id_2, page=self.page_2))
            self.page_2 += 1
            print len(self.tweets_2)

        for t in self.tweets_2:
            self.ids_2.insert(0, t.id)
            self.created_at_2.insert(0, t.created_at)
            self.tweet_text_2.insert(0, t.text)
            self.last_id_2 = t.id

        print 'Total number of tweets in test 1: ', len(self.tweets_1)
        print 'Last id of test 1: ', self.last_id_1
        print 'Total number of tweets in test 2: ', len(self.tweets_2)
        print 'Last id of test 2: ', self.last_id_2
        print '##################################'
        print '############OVERLAPPING###########'

        ids_overlap = set(self.ids_1).intersection(self.ids_2)
        tweets_text_overlap = set(self.tweet_text_1).intersection(self.tweet_text_2)
        created_at_overlap = set(self.created_at_1).intersection(self.created_at_2)

        print 'Ids: ', len(ids_overlap)
        print 'Text: ', len(tweets_text_overlap)
        print 'Created_at: ', len(created_at_overlap)
        print ids_overlap
        print tweets_text_overlap
        print created_at_overlap

    def t_auth(self):
        consumer_key = "xxx"
        consumer_secret = "xxx"
        access_key = "xxx"
        access_secret = "xxx"
        self.api = twitter.Api(consumer_key, consumer_secret, access_key, access_secret)
        self.api.VerifyCredentials()
        return self.api

if __name__ == "__main__":
    Test()
In addition to 'since_id', you can use 'max_id'. From the Twitter API documentation:
Iterating in a result set: parameters such as count, until, since_id, and max_id allow you to control how you iterate through search results, since it could be a large set of tweets.
By setting these values dynamically, you can restrict your search results so they do not overlap. For example, if max_id is set to 1100 and since_id is set to 1000, you will only receive tweets with IDs between those two values.
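For example, here is a minimal sketch (untested) that reuses the python-twitter GetSearch call from the code above; it feeds the highest ID seen in the first batch into since_id for the second, which is the dynamic updating the question is missing:

# Sketch: make the second request start where the first one ended,
# so the two batches cannot overlap.
first_batch = api.GetSearch('justinbieber', per_page=100)
newest_id = max(t.id for t in first_batch)  # highest tweet ID seen so far

second_batch = api.GetSearch('justinbieber', per_page=100, since_id=newest_id)
overlap = set(t.id for t in first_batch) & set(t.id for t in second_batch)
print len(overlap)  # ideally 0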
Related
I am trying to retrieve about 1000 tweets for a search term like 'NFL' using tweepy, and to store the tweets in a DataFrame using pandas. My issue is that I can't find a way to remove duplicated tweets; I have tried df.drop_duplicates, but it only leaves me about 100 tweets to work with. Help would be appreciated!
num_needed = 1000
tweet_list = []  # lists to be added as columns (tweets, usernames, and screen names) in our dataframe
user_list = []
screen_name_list = []
last_id = -1  # ID of last tweet seen

while len(tweet_list) < num_needed:
    try:
        # This is the criteria for collecting the tweets that I want. I want to make sure
        # the results are as accurate as possible when making a final analysis.
        new_tweets = api.search(q='NFL', count=num_needed, max_id=str(last_id - 1), lang='en', tweet_mode='extended')
    except tweepy.TweepError as e:
        print("Error", e)
        break
    else:
        if not new_tweets:
            print("Could not find any more tweets!")
            break
        else:
            for tweet in new_tweets:
                # Fetching the screen name and username
                screen_name = tweet.author.screen_name
                user_name = tweet.author.name
                tweet_text = tweet.full_text
                tweet_list.append(tweet_text)
                user_list.append(user_name)
                screen_name_list.append(screen_name)

df = pd.DataFrame()  # create a new dataframe (df) with new columns
df['Screen name'] = screen_name_list
df['Username'] = user_list
df['Tweets'] = tweet_list
Well, yes, when you use .drop_duplicates() you only get about 100 tweets, because that's how many unique tweets there are; the other 900 or so are duplicates, no matter what deduplication technique you use.
So you might ask: why? The API returns at most 100 tweets per call, which I assume you are aware of, since you loop and try to get more using the max_id parameter. However, your last_id is always -1 here: you never read the IDs from the tweets, so the parameter never changes. One fix is to collect the tweet IDs while you iterate through the tweets; after each batch, store the minimum ID value as last_id, and then it will work in your loop:
Code:
num_needed = 1000
tweet_list = []  # lists to be added as columns (tweets, usernames, and screen names) in our dataframe
user_list = []
screen_name_list = []
tw_id = []  # <-- ADDED THIS
last_id = -1  # ID of last tweet seen

while len(tweet_list) < num_needed:
    try:
        new_tweets = api.search(q='NFL -filter:retweets', count=num_needed, max_id=str(last_id - 1), lang='en', tweet_mode='extended')
    except tweepy.TweepError as e:
        print("Error", e)
        break
    else:
        if not new_tweets:
            print("Could not find any more tweets!")
            break
        else:
            for tweet in new_tweets:
                # Fetching the screen name and username
                screen_name = tweet.author.screen_name
                user_name = tweet.author.name
                tweet_text = tweet.full_text
                tweet_list.append(tweet_text)
                user_list.append(user_name)
                screen_name_list.append(screen_name)
                tw_id.append(tweet.id)  # <-- ADDED THIS
            last_id = min(tw_id)  # <-- ADDED THIS

df = pd.DataFrame({'Screen name': screen_name_list,
                   'Username': user_list,
                   'Tweets': tweet_list})
df = df.drop_duplicates()
This returns approximately 1000 tweets for me.
Output:
print (len(df))
1084
I am trying to stream live tweets with a given hashtag using the tweepy library. I am using the following code, taken from https://galeascience.wordpress.com/2016/03/18/collecting-twitter-data-with-python/
I am new to Python coding and APIs.
import tweepy
from tweepy import OAuthHandler
import json
import datetime as dt
import time
import os
import sys

def load_api():
    ''' Function that loads the twitter API after authorizing the user. '''
    consumer_key = 'xxx'
    consumer_secret = 'xxx'
    access_token = 'yyy'
    access_secret = 'yyy'
    auth = OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_secret)
    # load the twitter API via tweepy
    return tweepy.API(auth)

def tweet_search(api, query, max_tweets, max_id, since_id, geocode):
    ''' Function that takes in a search string 'query', the maximum
        number of tweets 'max_tweets', and the minimum (i.e., starting)
        tweet id. It returns a list of tweepy.models.Status objects. '''
    searched_tweets = []
    while len(searched_tweets) < max_tweets:
        remaining_tweets = max_tweets - len(searched_tweets)
        try:
            new_tweets = api.search(q=query, count=remaining_tweets,
                                    since_id=str(since_id),
                                    max_id=str(max_id - 1))
                                    # geocode=geocode)
            print('found', len(new_tweets), 'tweets')
            if not new_tweets:
                print('no tweets found')
                break
            searched_tweets.extend(new_tweets)
            max_id = new_tweets[-1].id
        except tweepy.TweepError:
            print('exception raised, waiting 15 minutes')
            print('(until:', dt.datetime.now() + dt.timedelta(minutes=15), ')')
            time.sleep(15*60)
            break  # stop the loop
    return searched_tweets, max_id

def get_tweet_id(api, date='', days_ago=7, query='a'):
    ''' Function that gets the ID of a tweet. This ID can then be
        used as a 'starting point' from which to search. The query is
        required and has been set to a commonly used word by default.
        The variable 'days_ago' has been initialized to the maximum
        amount we are able to search back in time (9). '''
    if date:
        # return an ID from the start of the given day
        td = date + dt.timedelta(days=1)
        tweet_date = '{0}-{1:0>2}-{2:0>2}'.format(td.year, td.month, td.day)
        tweet = api.search(q=query, count=1, until=tweet_date)
    else:
        # return an ID from __ days ago
        td = dt.datetime.now() - dt.timedelta(days=days_ago)
        tweet_date = '{0}-{1:0>2}-{2:0>2}'.format(td.year, td.month, td.day)
        # get list of up to 10 tweets
        tweet = api.search(q=query, count=10, until=tweet_date)
    print('search limit (start/stop):', tweet[0].created_at)
    # return the id of the first tweet in the list
    return tweet[0].id

def write_tweets(tweets, filename):
    ''' Function that appends tweets to a file. '''
    with open(filename, 'a') as f:
        for tweet in tweets:
            json.dump(tweet._json, f)
            f.write('\n')

def main():
    ''' This is a script that continuously searches for tweets
        that were created over a given number of days. The search
        dates and search phrase can be changed below. '''

    ''' search variables: '''
    search_phrases = ['#Messi']
    time_limit = 1.5                    # runtime limit in hours
    max_tweets = 200                    # number of tweets per search (will be
                                        # iterated over) - maximum is 100
    min_days_old, max_days_old = 1, 5   # search limits e.g., from 7 to 8
                                        # gives current weekday from last week,
                                        # min_days_old=0 will search from right now

    # loop over search items, creating a new file for each
    for search_phrase in search_phrases:
        print('Search phrase =', search_phrase)

        ''' other variables '''
        name = search_phrase.split()[0]
        json_file_root = name + '/' + name
        os.makedirs(os.path.dirname(json_file_root), exist_ok=True)
        read_IDs = False

        # open a file in which to store the tweets
        if max_days_old - min_days_old == 1:
            d = dt.datetime.now() - dt.timedelta(days=min_days_old)
            day = '{0}-{1:0>2}-{2:0>2}'.format(d.year, d.month, d.day)
        else:
            d1 = dt.datetime.now() - dt.timedelta(days=max_days_old - 1)
            d2 = dt.datetime.now() - dt.timedelta(days=min_days_old)
            day = '{0}-{1:0>2}-{2:0>2}_to_{3}-{4:0>2}-{5:0>2}'.format(
                  d1.year, d1.month, d1.day, d2.year, d2.month, d2.day)
        json_file = json_file_root + '_' + day + '.json'
        if os.path.isfile(json_file):
            print('Appending tweets to file named: ', json_file)
            read_IDs = True

        # authorize and load the twitter API
        api = load_api()

        # set the 'starting point' ID for tweet collection
        if read_IDs:
            # open the json file and get the latest tweet ID
            with open(json_file, 'r') as f:
                lines = f.readlines()
                max_id = json.loads(lines[-1])['id']
                print('Searching from the bottom ID in file')
        else:
            # get the ID of a tweet that is min_days_old
            if min_days_old == 0:
                max_id = -1
            else:
                max_id = get_tweet_id(api, days_ago=(min_days_old - 1))
        # set the smallest ID to search for
        since_id = get_tweet_id(api, days_ago=(max_days_old - 1))
        print('max id (starting point) =', max_id)
        print('since id (ending point) =', since_id)

        ''' tweet gathering loop '''
        start = dt.datetime.now()
        end = start + dt.timedelta(hours=time_limit)
        count, exitcount = 0, 0
        while dt.datetime.now() < end:
            count += 1
            print('count =', count)
            # collect tweets and update max_id
            tweets, max_id = tweet_search(api, search_phrase, max_tweets,
                                          max_id=max_id, since_id=since_id,
                                          geocode=USA)
            # write tweets to file in JSON format
            if tweets:
                write_tweets(tweets, json_file)
                exitcount = 0
            else:
                exitcount += 1
                if exitcount == 3:
                    if search_phrase == search_phrases[-1]:
                        sys.exit('Maximum number of empty tweet strings reached - exiting')
                    else:
                        print('Maximum number of empty tweet strings reached - breaking')
                        break

if __name__ == "__main__":
    main()
It throws the following error:
Traceback (most recent call last):
  File "search.py", line 189, in <module>
    main()
  File "search.py", line 157, in main
    since_id = get_tweet_id(api, days_ago=(max_days_old-1))
  File "search.py", line 80, in get_tweet_id
    tweet = api.search(q=query, count=10, until=tweet_date)
  File "/usr/local/lib/python3.5/dist-packages/tweepy/binder.py", line 245, in _call
    return method.execute()
  File "/usr/local/lib/python3.5/dist-packages/tweepy/binder.py", line 229, in execute
    raise TweepError(error_msg, resp, api_code=api_error_code)
tweepy.error.TweepError: [{'code': 215, 'message': 'Bad Authentication data.'}]
I entered the relevant tokens, but it still doesn't work. Any help would be appreciated.
It's rare, but it sometimes happens that application keys need to be regenerated because of something (?) on the back end. I don't know if that's your issue, but it's worth trying.
Also, you are not actually streaming tweets; streaming requires a different kind of request. You are using Twitter's REST search API to fetch tweets that have already been posted.
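For reference, actual streaming looks roughly like this in tweepy 3.x (a minimal sketch, untested; it assumes the same auth object built in load_api()):

import tweepy

class HashtagListener(tweepy.StreamListener):
    # called once for each live tweet delivered by the stream
    def on_status(self, status):
        print(status.text)

stream = tweepy.Stream(auth=auth, listener=HashtagListener())
stream.filter(track=['#Messi'])  # blocks, delivering matching tweets as they are posted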
You can only retrieve 100 user objects per request with the api.lookup_users() method. Is there an easy way to retrieve more than 100 using Tweepy and Python? I have read this post: User ID to Username tweepy, but it does not help with the more-than-100 problem. I am pretty novice in Python, so I cannot come up with a solution myself. What I have tried is this:
users = []
i = 0
num_pages = 2
while i < num_pages:
    try:
        # Look up a collection of ids
        users.append(api.lookup_users(user_ids=ids[100*i:100*(i+1)-1]))
    except tweepy.TweepError:
        # We get a tweep error
        print('Something went wrong, quitting...')
    i = i + 1
where ids is a list containing the user ids, but I get IndexError: list index out of range when I try to get a user with an index higher than 100. If it helps, I am only interested in getting the screen names from the user ids.
You're right that you need to send the user IDs to the API in batches of 100, but you're ignoring the fact that you might not have an exact multiple of 100 IDs. Try the following:
import tweepy

def lookup_user_list(user_id_list, api):
    full_users = []
    users_count = len(user_id_list)
    try:
        # one extra iteration picks up the leftover ids beyond the last full hundred
        for i in range((users_count / 100) + 1):
            full_users.extend(api.lookup_users(user_ids=user_id_list[i*100:min((i+1)*100, users_count)]))
        return full_users
    except tweepy.TweepError:
        print 'Something went wrong, quitting...'

results = lookup_user_list(ids, api)
By taking the minimum of (i+1)*100 and users_count as the end of each slice, we ensure the final iteration only requests the users left over. results will be a list of the looked-up users.
You may also hit rate limits; when setting up your API, you should take care to let tweepy catch these gracefully and remove some of the hard work, like so:
consumer_key = 'X'
consumer_secret = 'X'
access_token = 'X'
access_token_secret = 'X'
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
I haven't tested it since I don't have access to the API, but if you have a collection of user ids in any quantity, this should fetch all of them.
It fetches any remainder first: if you have a list of 250 ids, it first fetches the 50 users behind the last 50 ids in the list, and then fetches the remaining 200 users in batches of one hundred.
from tweepy import api, TweepError

users = []
user_ids = []  # collection of user ids
count_100 = int(len(user_ids) / 100)  # number of full batches of one hundred ids
remainder = len(user_ids) % 100       # ids left over after the full batches

for i in range(0, count_100 + 1):
    try:
        if i == 0:
            if remainder > 0:
                # fetch the leftover ids at the tail of the list first
                users.append(api.lookup_users(user_ids=user_ids[-remainder:]))
        else:
            end_at = i * 100
            start_at = end_at - 100
            users.append(api.lookup_users(user_ids=user_ids[start_at:end_at]))
    except TweepError:
        print('Something went wrong, quitting...')
I am using Tweepy to get all tweets made by #UserName. Here is the code:
import urllib, json
import sys
import tweepy
from tweepy import OAuthHandler

def twitter_fetch(screen_name="prateek", maxnumtweets=10):
    consumer_token = ""  # Keys removed for security
    consumer_secret = ""
    access_token = ""
    access_secret = ""
    auth = tweepy.OAuthHandler(consumer_token, consumer_secret)
    auth.set_access_token(access_token, access_secret)
    api = tweepy.API(auth)
    for status in tweepy.Cursor(api.user_timeline, id=screen_name).items(1):
        print status['statuses_count']
        print '\n'

if __name__ == '__main__':
    twitter_fetch('BarackObama', 200)
How do I parse the JSON properly to read the number of statuses made by that particular user?
How about something that keeps track of how many statuses you've iterated through? I'm not positive how tweepy works, but something like this:
statuses = 0
for status in tweepy.Cursor(api.user_timeline, id=screen_name).items(1):
    print status['statuses_count']
    statuses += 1
    print '\n'
return statuses
Usually JSON data has a nice, clearly formatted structure, which makes it easier to understand (see the abridged sketch below).
So when I want to iterate through such a list to find whether an x (an achievement, in this case) exists, I use the following function, which adds 1 to index on every iteration it goes through:
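For illustration, here is an abridged, made-up sketch of the GetPlayerAchievements response shape that the function below parses (field values are invented; real responses contain more fields):

# Hypothetical, abridged response from the Steam GetPlayerAchievements endpoint
achjson = {
    'playerstats': {
        'steamID': '7656119...',
        'gameName': 'ExampleGame',
        'achievements': [
            {'apiname': 'ACH_WIN_ONE_GAME', 'achieved': 1, 'name': 'Winner'},
            {'apiname': 'ACH_WIN_100_GAMES', 'achieved': 0, 'name': 'Champion'},
        ],
        'success': True,
    }
}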
import json
import urllib.request

def achnamefdr(appid, mykey, steamid64, achname):
    playerachurl = 'http://api.steampowered.com/ISteamUserStats/GetPlayerAchievements/v0001/?appid=' + str(appid) + '&key=' + mykey + '&steamid=' + steamid64 + '&l=name'
    achjson = json.loads(urllib.request.urlopen(playerachurl).read().decode('utf-8'))
    achjsonr = achjson['playerstats']['achievements']
    index = 0
    for ach in achjsonr:
        if not ach['name'].lower() == achname.lower():
            index += 1
            continue
        else:
            achnamef = ach['name']
            return achnamef, index, True
    return 'Invalid Achievement!', index, False
It can be done by getting the raw JSON object from status._json and then parsing it; note that statuses_count lives on the embedded user object, not at the top level of the tweet:
print status._json['user']['statuses_count']
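Equivalently, tweepy exposes the parsed JSON as model attributes, so the raw dict isn't strictly needed (a sketch based on the Cursor loop in the question):

for status in tweepy.Cursor(api.user_timeline, id=screen_name).items(1):
    # each status carries the author's user object, which holds the account's total tweet count
    print status.user.statuses_count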
import requests
import json

# initial message
message = "if i can't let it go out of my mind"

# split into list
split_message = message.split()

def decrementList(words):
    for w in [words] + [words[:-x] for x in range(1, len(words))]:
        url = 'http://ws.spotify.com/search/1/track.json?q='
        request = requests.get(url + "%20".join(w))
        json_dict = json.loads(request.content)
        num_results = json_dict['info']['num_results']
        if num_results > 0:
            num_removed = len(words) - len(w)
            #track_title = ' '.join(words)
            track_title = "If I Can't Take It with Me"
            for value in json_dict.items():
                if value == track_title:
                    print "match found"
            return num_removed, json_dict

num_words_removed, json_dict = decrementList(split_message)
In the code above, I am trying to match the name of a song to my search query. In this particular query the song will not match, but I have added a variable that will match a song for the returned query. The for loop at the end of the function is supposed to match the track_title variable, but I can't figure out why it isn't working. Is there a simple way to find all values for a known key? In this case, the key is "name".
You have to search for the track title within the tracks list of the response. So just change your code like this:
for value in json_dict["tracks"]:
    if value["name"] == track_title:
and it would print:
match found
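More generally, to gather every value of a known key such as "name", you can walk the same list with a comprehension (a sketch against the response shape used above):

# all values of the key 'name' across the returned track objects
track_names = [track['name'] for track in json_dict['tracks']]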