I'm using tweepy to access a large number of tweets. Many of them are truncated, so I want to get the full text of certain tweets for which I have the IDs.
My problem is: the tweepy API instance has a method for downloading multiple tweets at once (api.statuses_lookup), but it returns truncated tweets.
It also has a method that returns the full tweet text (api.get_status), but as far as I know it only takes one tweet at a time.
Is there a way to get the full text for multiple tweets at once?
import tweepy
consumer_key = "XXX"
secret = "XXX"
auth = tweepy.AppAuthHandler(consumer_key, secret)
auth.secure = True
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
ids = [1108360183586140161, 1108474125486641153]
# Finds tweets (up to 100 at a time), but doesn't contain extended text
foo = api.statuses_lookup(ids)
# Returns tweet, including extended text, but only for one at a time
bar = api.get_status(1108449077937635328, tweet_mode='extended')
As pointed out by Andy Piper, the issue was fixed in a recent update of the Tweepy library, so running
pip install tweepy --upgrade
solves this.
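After upgrading, a minimal sketch of what this looks like, assuming statuses_lookup now accepts the same tweet_mode='extended' keyword as get_status (so each returned status carries a full_text attribute):
import tweepy

consumer_key = "XXX"
secret = "XXX"
auth = tweepy.AppAuthHandler(consumer_key, secret)
api = tweepy.API(auth, wait_on_rate_limit=True)

ids = [1108360183586140161, 1108474125486641153]

# Assumption: after the upgrade, statuses_lookup forwards tweet_mode to the
# lookup endpoint just like get_status does.
statuses = api.statuses_lookup(ids, tweet_mode='extended')
for status in statuses:
    print(status.full_text)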
Related
I'm looking into the Twitter Search API, and apparently, it has a count parameter that determines "The number of tweets to return per page, up to a maximum of 100." What does "per page" mean, if I'm for example running a python script like this:
import twitter #python-twitter package
api = twitter.Api(consumer_key="mykey",
                  consumer_secret="mysecret",
                  access_token_key="myaccess",
                  access_token_secret="myaccesssecret")
results = api.GetSearch(raw_query="q=%23myHashtag&geocode=59.347937,18.072433,5km")
print(len(results))
This will only give me 15 tweets in results. I want more, preferably all tweets, if possible. So what should I do? Is there a "next page" option? Can't I just specify the search query in a way that gives me all tweets at once? Or if the number of tweets is too large, some maximum number of tweets?
Tweepy has a Cursor object that works like this:
for tweet in tweepy.Cursor(api.search, q="#myHashtag",
                           geocode="59.347937,18.072433,5km",
                           lang='en', tweet_mode='extended').items():
    # handle tweets here, e.g.
    print(tweet.full_text)
You can find more info in the Tweepy Cursor docs.
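If you also want to control how many tweets come back per request (the count parameter mentioned in the question, with a documented maximum of 100), here is a sketch under the assumption that Cursor forwards extra keyword arguments such as count to api.search:
# Assumption: count is passed through to the search endpoint; 100 is its maximum.
cursor = tweepy.Cursor(api.search,
                       q="#myHashtag",
                       geocode="59.347937,18.072433,5km",
                       lang='en',
                       tweet_mode='extended',
                       count=100)

for page in cursor.pages():      # each page is a list of up to 100 statuses
    for tweet in page:
        print(tweet.full_text)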
With TwitterAPI you would access pages this way:
from TwitterAPI import TwitterAPI, TwitterPager

# api here is a TwitterAPI instance (not tweepy)
pager = TwitterPager(api,
                     'search/tweets',
                     {'q': '#myHashtag', 'geocode': '59.347937,18.072433,5km'})
for item in pager.get_iterator():
    print(item['text'] if 'text' in item else item)
A complete example is here: https://github.com/geduldig/TwitterAPI/blob/master/examples/page_tweets.py
Preferably via Tweepy in Python, I want to obtain from the Twitter API a list of top tweets for a given search query and WOEID place identifier (Yahoo's Where On Earth IDentifier).
In my example, I obtain trending queries for a WOEID id via Tweepy's API.trends_place(id) wrapper for the Twitter REST API's GET trends/place; I then want to print the top tweets for each trending query within this place (same WOEID).
Currently, I obtain tweets for the trending query, but
not within the given place;
not necessarily the "top" tweets (as opposed to, for example, "recent").
How can I add these two restrictions to my search?
MWE:
import tweepy
from tweepy import OAuthHandler
consumer_key = 'YOUR-CONSUMER-KEY'
consumer_secret = 'YOUR-CONSUMER-SECRET'
access_token = 'YOUR-ACCESS-TOKEN'
access_secret = 'YOUR-ACCESS-SECRET'
auth = OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)
api = tweepy.API(auth)
locationid = 23424775 # WOEID for Canada
trendqueries = [trend['query'] for trend in api.trends_place(locationid)[0]['trends']]
for trendquery in trendqueries:
    print(api.search(q=trendquery))
What I have tried:
I can search by longitude/latitude using Tweepy's API.search(q, geocode), but I do not see an obvious way to search by WOEID.
Partial answer
API.search(q[, lang][, locale][, rpp][, page][, since_id][, geocode][, show_user])
Returns tweets that match a specified query.
Parameters:
geocode – Returns tweets by users located within a given radius of the given latitude/longitude. The location is preferentially taken from the Geotagging API, but will fall back to their Twitter profile. The parameter value is specified by “latitude,longitude,radius”, where radius units must be specified as either “mi” (miles) or “km” (kilometers). Note that you cannot use the near operator via the API to geocode arbitrary locations; however you can use this geocode parameter to search near geocodes directly.
show_user – When true, prepends “&lt;user&gt;:” to the beginning of the tweet. This is useful for readers that do not display Atom’s author field. The default is false.
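Building on that, a rough sketch of one way to get both restrictions, reusing api and trendqueries from the MWE, assuming you translate the WOEID into a latitude/longitude/radius yourself (the search endpoint takes geocode, not WOEID) and that result_type='popular' corresponds to the "top" tweets:
# Assumption: the WOEID has to be mapped to a lat/long/radius by hand; the
# values below are a hypothetical centre point and radius for Canada.
geocode = '56.130366,-106.346771,2000km'

for trendquery in trendqueries:
    top_tweets = api.search(q=trendquery,
                            geocode=geocode,
                            result_type='popular')
    for tweet in top_tweets:
        print(tweet.text)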
I am using the tweepy Python library to fetch data from Twitter. I want to get tweets sent from user1 to user3 that contain word1, word2, #hashText1, etc.
query = "word1 word2 #hashtext1"
max_tweets = 10
searched_tweets = [status for status in tweepy.Cursor(api.search, q=query, from='user2', to='user2').items(max_tweets)]
The problem I am facing is that I am not able to use from as an argument name, since from is a keyword in Python. When I remove the from parameter, the query works fine.
I have looked into the source code of tweepy here. It lists from as a valid parameter.
Please help.
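Not a definitive answer, but one workaround sketch: instead of trying to pass from/to as Python keyword arguments, embed Twitter's from: and to: search operators in the query string itself, which sidesteps the keyword clash entirely (user names and terms below are placeholders, and the api object is reused from your existing setup):
# Assumption: the standard from:/to: search operators are honoured when they
# appear inside q, so no separate keyword arguments are needed.
query = "word1 word2 #hashtext1 from:user1 to:user2"
max_tweets = 10

searched_tweets = [status for status in
                   tweepy.Cursor(api.search, q=query).items(max_tweets)]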
I'm hoping to track tweets that contain a certain set of words, but not others. For example, if my filter is: "taco" AND ("chicken" OR "beef").
It should return these tweets:
-I am eating a chicken taco.
-I am eating a beef taco.
It should not return these tweets:
-I am eating a taco.
-I am eating a pork taco.
Here is the code I'm currently running:
from tweepy import Stream
from tweepy import OAuthHandler
from tweepy.streaming import StreamListener
import time
import json
# authentication data- get this info from twitter after you create your application
ckey = '...' # consumer key, AKA API key
csecret = '...' # consumer secret, AKA API secret
atoken = '...' # access token
asecret = '...' # access secret
# define listener class
class listener(StreamListener):

    def on_data(self, data):
        try:
            print data  # write the whole tweet to terminal
            return True
        except BaseException, e:
            print 'failed on data, ', str(e)  # if there is an error, show what it is
            time.sleep(5)  # one error could be that you're rate-limited; this will cause the script to pause for 5 seconds

    def on_error(self, status):
        print status
# authenticate yourself
auth = OAuthHandler(ckey, csecret)
auth.set_access_token(atoken, asecret)
twitterStream = Stream(auth, listener())
twitterStream.filter(track=["taco"]) # track what you want to search for!
The last line of the code is the part I'm struggling with; if I use:
twitterStream.filter(track=["taco","chicken","beef"])
it will return all tweets containing any of the three words. Other things I've tried, such as:
twitterStream.filter(track=(["taco"&&("chicken","beef")])
return a syntax error.
I'm fairly new to both Python and Tweepy. Both this and this seem like similar queries, but they are related to tracking multiple terms simultaneously, rather than tracking a subset of tweets containing a term. I haven't been able to find anything in the tweepy documentation.
I know another option would be tracking all tweets containing "taco" then filtering by "chicken" or "beef" into my database, but I'm worried about running up against the 1% streaming rate limit if I do a general search and then filter it down within Python, so I'd prefer only streaming the terms I want in the first place from Twitter.
Thanks in advance-
Sam
Twitter does not allow you to be very precise in how keywords are matched. However, the track parameter documentation states that spaces within a keyword are equivalent to logical ANDs. All of the terms you specify are OR'd together.
So, to achieve your "taco" AND ("chicken" OR "beef") example, you could try the parameters ["taco chicken", "taco beef"]. This would match tweets containing the words taco and chicken, or taco and beef. However, this isn't a perfect solution, as a tweet containing taco, chicken, and beef would also be matched.
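As a concrete sketch of that suggestion, reusing the stream object from the question:
# Spaces within one phrase act as AND, the comma between phrases acts as OR,
# so this matches (taco AND chicken) OR (taco AND beef).
twitterStream.filter(track=["taco chicken", "taco beef"])
The rare false positive (a tweet that mentions taco, chicken, and beef together) would still come through; if that matters, a simple check on the tweet text inside on_data can drop it.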
Here is my current code
from twitter import *
t = Twitter(auth=OAuth(TWITTER_CONSUMER_KEY, TWITTER_CONSUMER_SECRET,
                       ACCESS_TOKEN, ACCESS_TOKEN_SECRET))
t.statuses.home_timeline()
query=raw_input("enter the query \n")
data = t.search.tweets(q=query)
for i in range(0, 1000):
    print data['statuses'][i]['text']
    print '\n'
Here, I fetch tweets in all languages. Is there a way to restrict myself to fetching tweets only in English?
There are at least four ways; I've put them in order of simplicity.
After you collect the tweets, the JSON output has a key/value pair that identifies the language, so you can use something like this to take tweets in all languages and keep only the ones Twitter has identified as English:
for i in range(0, 1000):
    if data['statuses'][i][u'lang'] == u'en':
        print data['statuses'][i]['text']
        print '\n'
Another way to collect only tweets identified as English is to use the optional 'lang' parameter, which asks the API to return only (self-identified) English tweets. See details here. If you are using the python-twitter library, you can set the 'lang' parameter in twitter.py.
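With the library the question actually uses, a minimal sketch, assuming t.search.tweets passes the lang keyword straight through to the search API:
# Assumption: lang='en' is forwarded to search/tweets, so only tweets Twitter
# has identified as English come back.
data = t.search.tweets(q=query, lang='en')
for status in data['statuses']:
    print(status['text'])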
Use a language recognition package like guess-language.
Or, if you want to recognize English text without relying on the self-identified Twitter data (i.e. a Chinese account that is writing in English), then you have to do natural language processing. One option. This method will recognize common English words and then mark the text as English.
I tried this for Farsi:
import tweepy

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

query = "your search terms"  # placeholder search terms
res = api.search(q=query, lang='fa')  # lang='fa' restricts results to Farsi
for tweet in res:
    print(tweet.lang)