Twitter API v2.0 added a conversation_id which supposedly enables easier access to an entire Twitter thread. Previously this kind of work was done by accessing the original tweet and then keeping track of the in_reply_to_status_id field. This is easy for replies to the original tweet, but is more cumbersome for replies to replies.
I suspect I'm missing something simple and appreciate any help. The API documentation says the conversation_id will match the tweet_id of the tweet which started the original conversation. After applying my tokens, the search code is:
replies=[]
for tweet in tweepy.Cursor(api.search_tweets, q='conversation_id:<1565355188004741120>', timeout=999999).items(1000):
    replies.append(tweet)
However, the list replies comes back empty. I know that the conversation_id is valid because I've accessed the first-level replies via legacy code. Any insight into using the conversation_id field is appreciated. There's surprisingly little information out there on its use. Thank you!
SOLUTION
The issue was Twitter API v1.1 vs v2 (tweepy.API vs tweepy.Client). The following works:
replies=[]
for tweet in tweepy.Paginator(client.search_recent_tweets, query='conversation_id:1565355188004741120'):
    replies.append(tweet)
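For anyone else landing here, a minimal sketch of the full v2 flow, assuming a bearer token with recent-search access (the token value is a placeholder). Note that iterating a Paginator directly yields pages (Response objects), so .flatten() is used here to get individual tweets:

import tweepy

# Placeholder bearer token; requires a v2 app with access to recent search.
client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")

replies = []
# .flatten() yields individual Tweet objects instead of page-sized Response objects.
for tweet in tweepy.Paginator(client.search_recent_tweets,
                              query='conversation_id:1565355188004741120',
                              max_results=100).flatten(limit=1000):
    replies.append(tweet)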
Related
I have created a Twitter bot using the Tweepy API. It works great: it tweets a tweet with a photo attached to it. However, I would like to add "mentions" or "In this photo" #accounts (tags) to the attached image. This is possible on Twitter. I have read the Tweepy documentation and searched online, looking for a parameter, but I could not find anything. Any suggestion would be appreciated!
code snippet:
api = tweepy.API(oauth)
api.update_with_media(filename='screenshot.png', status=masterStatus)
First of all, please do not use update_with_media, as this is a deprecated API. It works for now, but will get no support. The current correct route is the two-step use of media_upload, followed by update_status (you can also add image alt text using create_media_metadata in between those two steps, if you like). The Tweepy 3.9.0 documentation mentions this.
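For reference, a minimal sketch of that two-step route, reusing the oauth handler, filename, and masterStatus from the question (the alt text is just an example):

import tweepy

api = tweepy.API(oauth)

# Step 1: upload the image; Twitter returns a media object with a media_id.
media = api.media_upload(filename='screenshot.png')

# Optional: attach alt text to the uploaded image.
api.create_media_metadata(media.media_id, 'Screenshot posted by the bot')

# Step 2: post the tweet, referencing the uploaded media by ID.
api.update_status(status=masterStatus, media_ids=[media.media_id])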
For the main part of the question - the Twitter API itself does not currently support adding people tags to images, so this is also unavailable in Tweepy. If you would like to request this feature in a future version of the Twitter API, you can do so here.
I just started working with the Twitter API. I don't normally have a Twitter account; I created one for this. I tweeted 4 times, including some mentions. But when I use mentions_timeline like this:
my_mentions = api.mentions_timeline()
#print(my_mentions)
#output: []
Then I do a for loop over my_mentions using the attributes text and screen_name, but nothing is returned.
What am I doing wrong here? Why is it an empty list when I mentioned some people in my tweets? Also, how can I search mentions for another user? Is there a parameter on mentions_timeline() such as screen_name or id?
Try using the new Cursor Object as follows:
api = tweepy.API(auth)
for mentions in tweepy.Cursor(api.mentions_timeline).items():
    # process mentions here
    print(mentions.text)
as per Twitter's documentation here:
Returns the 20 most recent mentions (tweets containing a user's @screen_name) for the authenticating user.
so you cannot check other users' mentions using this method. To achieve this, you will have to use Twitter's search API. For Tweepy's documentation, check here.
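For example, a rough sketch of searching another user's mentions via the search API; the screen name is a placeholder, and depending on your Tweepy version the method is api.search or api.search_tweets:

# Search recent tweets that mention another user (placeholder screen name).
for tweet in tweepy.Cursor(api.search_tweets, q="@some_other_user").items(20):
    print(tweet.user.screen_name, tweet.text)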
import tweepy
api = tweepy.API(auth)
api.mentions_timeline()
Try going to your profile with the same account you're using for the API and see whether the mentions exist in your profile.
Also try mentioning the Twitter account from a different account than the one you are testing with.
It might be the case that Twitter has limited your account's activity, so replies/tweets from that account are not visible.
I just got started programming a year ago (on and off) and I am trying to save an attachment to a folder offline from my personal GMAIL account.
I was advised to use :
https://developers.google.com/gmail/api/quickstart/python
I set it up and authenticated my account, and now I am trying to get comfortable with this tool.
There are some initial questions that I have:
What is User ID?
Is this my email (tttt@xxxx.xxxx)
or someone else's email (ppppp@yyyy.yyy)?
How do I get email IDs?
These questions stem from ...
GET https://www.googleapis.com/gmail/v1/users/userId/messages/messageId/attachments/id
from the page :
https://developers.google.com/gmail/api/v1/reference/users/messages/attachments/get
Again, I am just learning, starting from a beginner's place.
Thanks
Just use "me" as the userId as it says in the doc.
To get a messageId first you have to search (list) messages, using something like:
resp = gmail.users().messages().list(userId="me", q="has:attachment subject:'foo bar' before:2014-01-05").execute()
you can then iterate through the 'messages' in that resp and
gmail.users().messages().get(userId="me", id=message['id']).execute()
The Gmail API guides are quite helpful, take a look at them, for example:
https://developers.google.com/gmail/api/guides/filtering
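Putting this together for the original goal (saving an attachment to a local folder), here is a rough sketch, assuming gmail is the authenticated service object from the quickstart; nested multipart messages are not handled, and the attachment data comes back base64url-encoded:

import base64

resp = gmail.users().messages().list(
    userId="me", q="has:attachment subject:'foo bar'").execute()

for m in resp.get('messages', []):
    msg = gmail.users().messages().get(userId="me", id=m['id']).execute()
    # Look through the top-level MIME parts for attachments.
    for part in msg['payload'].get('parts', []):
        if part.get('filename') and part['body'].get('attachmentId'):
            att = gmail.users().messages().attachments().get(
                userId="me", messageId=m['id'],
                id=part['body']['attachmentId']).execute()
            # Attachment bytes are base64url-encoded in the 'data' field.
            data = base64.urlsafe_b64decode(att['data'])
            with open(part['filename'], 'wb') as f:
                f.write(data)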
I have been trying to figure this out, but it is really frustrating. I'm trying to get tweets with a certain hashtag (a great number of tweets) using Tweepy. But this doesn't go back more than one week. I need to go back at least two years, for a period of a couple of months. Is this even possible, and if so, how?
Just for reference, here is my code:
import tweepy
import csv
consumer_key = '####'
consumer_secret = '####'
access_token = '####'
access_token_secret = '####'
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)
# Open/Create a file to append data
csvFile = open('tweets.csv', 'a')
#Use csv Writer
csvWriter = csv.writer(csvFile)
for tweet in tweepy.Cursor(api.search,q="#ps4",count=100,\
lang="en",\
since_id=2014-06-12).items():
print tweet.created_at, tweet.text
csvWriter.writerow([tweet.created_at, tweet.text.encode('utf-8')])
As you have noticed, the Twitter API has some limitations. I have implemented code that does this using the same strategy Twitter uses when running in a browser. Take a look; you can get the oldest tweets: https://github.com/Jefferson-Henrique/GetOldTweets-python
You cannot use the twitter search API to collect tweets from two years ago. Per the docs:
Also note that the search results at twitter.com may return historical results while the Search API usually only serves tweets from the past week. - Twitter documentation.
If you need a way to get old tweets, you can get them from individual users because collecting tweets from them is limited by number rather than time (so in many cases you can go back months or years). A third-party service that collects tweets like Topsy may be useful in your case as well (shut down as of July 2016, but other services exist).
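As an illustration of the per-user route, a rough sketch that pulls a single account's timeline with Tweepy; the screen name is a placeholder, and Twitter caps this at roughly the most recent 3,200 tweets per user:

# Walk back through one user's timeline (capped by tweet count, not by date).
for tweet in tweepy.Cursor(api.user_timeline, screen_name="some_user", count=200).items(3000):
    print(tweet.created_at, tweet.text)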
Found one code that would help retrieve older tweets.
https://github.com/Jefferson-Henrique/GetOldTweets-python
To get old tweets, run the following command in the directory where the code repository got extracted.
python Exporter.py --querysearch 'keyword' --since 2016-01-10 --until 2016-01-15 --maxtweets 1000
It returned a file 'output_got.csv' with 1000 tweets from the above date range containing your keyword.
You need to install the module 'pyquery' for this to work.
PS: You can modify 'Exporter.py' python code file to get more tweet attributes as per your requirement.
2018 update:
Twitter has Premium search APIs that can return results from the beginning of time (2006):
https://developer.twitter.com/en/docs/tweets/search/overview/premium#ProductPackages
Search Tweets: 30-day endpoint → provides Tweets from the previous 30 days.
Search Tweets: Full-archive endpoint → provides complete and instant access to Tweets dating all the way back to the first Tweet in March 2006.
With an example Python client:
https://github.com/twitterdev/search-tweets-python
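A rough sketch of using that client (the searchtweets package); the credentials file name, YAML key, query, and printed attribute are assumptions based on the repository's README:

from searchtweets import gen_rule_payload, load_credentials, collect_results

# Assumed credentials file and key name; adjust to match your own setup.
premium_search_args = load_credentials("twitter_keys.yaml",
                                       yaml_key="search_tweets_fullarchive",
                                       env_overwrite=False)

# Build a full-archive rule for an example hashtag and date window.
rule = gen_rule_payload("#ps4 lang:en",
                        from_date="2014-06-12",
                        to_date="2014-09-12",
                        results_per_call=100)

tweets = collect_results(rule, max_results=500,
                         result_stream_args=premium_search_args)
for tweet in tweets:
    print(tweet.all_text)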
I know this is a very old question, but some folks might still be facing the same issue.
After some digging, I found out Tweepy's search only returns data for the past 7 days, and that sometimes pushes people to buy a third-party service.
I used the Python library GetOldTweets3 and it worked fine for me. The library is really easy to use. Its only limitation is that you can't search for more than one hashtag in a single execution, but it works fine for searching multiple accounts at the same time.
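A rough sketch of how GetOldTweets3 is typically used; the hashtag and dates here are placeholders:

import GetOldTweets3 as got

# Build the search criteria: one hashtag, a date window, and a tweet cap.
criteria = (got.manager.TweetCriteria()
            .setQuerySearch('#ps4')
            .setSince("2014-06-12")
            .setUntil("2014-09-12")
            .setMaxTweets(1000))

for tweet in got.manager.TweetManager.getTweets(criteria):
    print(tweet.date, tweet.text)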
use the args "since" and "until" to adjust your timeframe. You are presently using since_id which is meant to correspond to twitter id values (not dates):
for tweet in tweepy.Cursor(api.search,
                           q="test",
                           since="2014-01-01",
                           until="2014-02-01",
                           lang="en").items():
As others have noted, the Twitter API has the date limitation, but the actual advanced search as implemented on twitter.com does not. So the solution is to use Python's wrapper for Selenium or PhantomJS to iterate through the twitter.com endpoint. Here's an implementation using Selenium that someone has posted on GitHub: https://github.com/bpb27/twitter_scraping/
I can't believe nobody has said this, but this Git repository completely solved my problem. I haven't been able to utilize other solutions such as GOT or the Twitter Premium API.
Try this, definitely useful:
https://betterprogramming.pub/how-to-scrape-tweets-with-snscrape-90124ed006af
https://github.com/MartinBeckUT/TwitterScraper/tree/master/snscrape/cli-with-python
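For reference, a rough sketch of the snscrape approach from those links; no API keys are needed, but attribute names such as date and content have shifted between snscrape versions, so treat them as assumptions:

import snscrape.modules.twitter as sntwitter

# Scrape tweets for a hashtag within a date range, capped at 1000 results.
for i, tweet in enumerate(
        sntwitter.TwitterSearchScraper('#ps4 since:2014-06-12 until:2014-09-12').get_items()):
    if i >= 1000:
        break
    print(tweet.date, tweet.content)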
I'm having trouble finding a simple Python Twitter OAuth example that shows how to post a user status on Twitter. Could you help me?
Check out Mike Knapp's library on GitHub.
Nice and simple, no install needed.
Here's an example that will get you authed with Twitter using rauth.
After that point all you'd have to do to update the authenticated user's status is:
r = session.post('statuses/update.json',
                 data={'status': 'Updating my status from the cmd line.'})
print(r.json())
(You only need to care about the code up until you retrieve your authenticated session object, i.e. line 20.)
Hope this helps!
Edit
You will need to get your own consumer_key and consumer_secret for this to work because the rauth demo app does not have write permissions, for obvious reasons. So you'll end up with this response if you try to run the modified script without updating the credentials:
{u'request': u'/1/statuses/update.json', u'error': u'Read-only application cannot POST'}
Ensure your application is allowed to write and it should work as expected.
Take a look at tweepy: http://code.google.com/p/tweepy/
Have you checked out http://github.com/simplegeo/python-oauth2 ?
Matthew A. Russell has written an excellent book on this, Mining the Social Web. Take a look at his example source for OAuth to Twitter. The code is available here, and I recommend his book as well, covering not only Twitter but Facebook and LinkedIn too.
The code is found here: OAuth to Twitter and collect friends' IDs
Good Luck
Here is a simple Twitter OAuth example I wrote as a blog post some time ago. Hope this helps.
If you're referring to http://code.google.com/p/python-twitter/ ..
On that page it is documented as
api = twitter.Api(consumer_key='consumer_key',
                  consumer_secret='consumer_secret',
                  access_token_key='access_token',
                  access_token_secret='access_token_secret')
To see if your credentials are successful:
print api.VerifyCredentials()
{"id": 16133, "location": "Philadelphia", "name": "bear"}
That works. Of course, your consumer key should never be human-readable in your application, but it would work even if it were.
*-pike
I've written an extremely simple twitter client (which is just for tweeting).
The source isn't the cleanest around, but the entire thing (including UI) is under 200 lines, so you should be able to extract anything you need from it: