The following code will un-retweet a single tweet by id:
# calling the api
api = tweepy.API(auth)
# the ID of the tweet to be un-retweeted
ID =
# un-retweeting the tweet
api.unretweet(ID)
I want to know how to un-retweet all the tweets that have ever been retweeted.
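A rough, untested sketch of one way to do this (assuming your credentials are already in consumer_key, consumer_secret, access_token and access_token_secret): walk your own timeline with retweets included and call unretweet() on every status that carries a retweeted_status attribute, since only retweets have it. As far as I can tell, passing the original tweet's ID works, but verify against the statuses/unretweet docs. Note that user_timeline only reaches back roughly 3,200 statuses, so very old retweets may stay out of reach.
import tweepy

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, wait_on_rate_limit=True)

# iterate the authenticating user's own timeline, retweets included
for status in tweepy.Cursor(api.user_timeline, include_rts=True, count=200).items():
    if hasattr(status, 'retweeted_status'):
        # pass the original tweet's ID to undo the retweet
        api.unretweet(status.retweeted_status.id)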
What if I want to take the Tweet IDs and media_keys and only get the Tweet IDs that have media_keys?
I was trying to do it with this sample but I got stuck:
https://docs.tweepy.org/en/stable/examples.html
client = tweepy.Client(consumer_key=API_KEY, consumer_secret=API_SECRET,
                       access_token=ACCESS_TOKEN, access_token_secret=ACCESS_TOKEN_SECRET,
                       bearer_token=Bearer_token)
counts = 10
search_result = client.get_list_tweets(id='list id', max_results=counts, expansions=["attachments.media_keys"])
tweets = search_result.data
includes = search_result.includes
medias = includes['media']
for tweetid in tweets:
    tid = tweetid.id
mediass = {media['media_key']: media for media in medias}
for tweet in tweets:
    print(tweet.id, mediass)
You can check the attachments field of Tweet objects to obtain media keys for media attached to the Tweet.
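For example, reusing the client from the question, a rough untested sketch that keeps only the Tweet IDs that actually have media keys; requesting the attachments tweet field explicitly makes the keys show up on each Tweet ('list id' is still a placeholder):
search_result = client.get_list_tweets(id='list id', max_results=10,
                                       tweet_fields=["attachments"],
                                       expansions=["attachments.media_keys"])
media_by_key = {media.media_key: media for media in search_result.includes.get("media", [])}
for tweet in search_result.data:
    # tweet.attachments is None (or lacks media_keys) when no media is attached
    if tweet.attachments and "media_keys" in tweet.attachments:
        print(tweet.id, [media_by_key[key] for key in tweet.attachments["media_keys"]])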
I apologize in advance if I don't know how to search the Tweepy documentation. I am quite new to python/programming in general.
I have written a small script to pull Twitter follower data for an account I manage for work. I would like to investigate when followers added us, to see if our posts are increasing engagement. What I cannot figure out is whether I can use the Tweepy module to pull this particular information (when the follower added us).
Thank you in advance for any help. My MWE:
import tweepy
import pandas as pd

# Load API keys
consumer_key = "my_consumer_key"
consumer_secret = "my_consumer_secret"
access_token = "my_access_token"
access_token_secret = "my_access_token_secret"

# Authenticate access to Twitter API
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

# Get the list of follower IDs for the account
followers = api.followers_ids()

# Create a user map
userMap = {}

# Loop over all followers
for f in followers:
    # create a temporary list
    tempList = []
    try:
        tweets = api.user_timeline(user_id=f, count=33)  # pull the 33 most recent tweets
    except tweepy.TweepError:
        print('Failed to run command.')  # Tweepy throws an error if a user hasn't tweeted
        continue  # skip this user so the previous user's tweets are not reused
    # Loop over all tweets for user f
    for t in tweets:
        tempList.append(t)
    userMap[f] = tempList

# Create lists of pertinent data
dateList = []
favList = []
rtList = []
keyList = []

def genList(tweetList):
    for tweets in tweetList:
        for t in tweets:
            keyList.append(str(t.id))
            dateList.append(str(t.created_at))
            favList.append(str(t.favorite_count))
            rtList.append(str(t.retweet_count))

genList(userMap.values())

# Create a pandas data frame (columns renamed to match what the lists actually hold)
df = pd.DataFrame(list(zip(keyList, dateList, favList, rtList)),
                  columns=['tweetID', 'created_at', 'favorite_count', 'retweet_count'])
This information is not provided by Twitter.
The followers/list endpoint (behind Tweepy's followers() method) returns a list of User objects, and none of their fields record when the follow happened. It looks like the only solution is to monitor the changes yourself and manage the history; one way to do that is sketched below.
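A rough sketch of that snapshot-and-diff idea (my own suggestion, not a library feature; it assumes api is the authenticated tweepy.API object from the question and uses a hypothetical local JSON file for the history):
import datetime
import json
import os

SNAPSHOT_FILE = "followers_snapshot.json"  # hypothetical local history file

def record_followers(api):
    # followers_ids() returns up to 5000 IDs; page with tweepy.Cursor for larger accounts
    current = set(api.followers_ids())
    previous = set()
    if os.path.exists(SNAPSHOT_FILE):
        with open(SNAPSHOT_FILE) as f:
            previous = set(json.load(f)["ids"])
    new_ids = current - previous    # followed us since the last run
    lost_ids = previous - current   # unfollowed us since the last run
    with open(SNAPSHOT_FILE, "w") as f:
        json.dump({"taken_at": datetime.datetime.utcnow().isoformat(),
                   "ids": sorted(current)}, f)
    return new_ids, lost_ids
Running this on a schedule and logging each run's new_ids with the run date gives an approximate "followed us on" date, which is the best you can do without Twitter exposing the real timestamp.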
I am trying to extract all tweets which contain a specific keyword, together with their geo locations.
For example, I want to download all tweets in English which contain the keyword 'iphone' from 'france' and 'singapore'.
My code:
import tweepy
import csv
import pandas as pd
import sys
# API credentials here
consumer_key = 'INSERT CONSUMER KEY HERE'
consumer_secret = 'INSERT CONSUMER SECRET HERE'
access_token = 'INSERT ACCESS TOKEN HERE'
access_token_secret = 'INSERT ACCESS TOKEN SECRET HERE'
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth,wait_on_rate_limit=True,wait_on_rate_limit_notify=True)
# Search word/hashtag value
HashValue = ""
# search start date value. the search will start from this date to the current date.
StartDate = ""
# getting the search word/hashtag and date range from user
HashValue = input("Enter the hashtag you want the tweets to be downloaded for: ")
StartDate = input("Enter the start date in this format yyyy-mm-dd: ")
# Open/Create a file to append data
csvFile = open(HashValue+'.csv', 'a')
#Use csv Writer
csvWriter = csv.writer(csvFile)
for tweet in tweepy.Cursor(api.search, q=HashValue, count=20, lang="en", since=StartDate, tweet_mode='extended').items():
    print(tweet.created_at, tweet.full_text)
    csvWriter.writerow([tweet.created_at, tweet.full_text.encode('utf-8')])
print ("Scraping finished and saved to "+HashValue+".csv")
#sys.exit()
How can this be done?
- Rahul
As I understand it, you are looking to get geo data off searched tweets rather than filter the search based on geocode.
Here is a code sample with the relevant fields you are interested in. These may or may not be provided, depending on the tweeter's privacy settings.
Note there is no "since" parameter on the search API:
https://tweepy.readthedocs.io/en/latest/api.html#help-methods
https://developer.twitter.com/en/docs/tweets/search/api-reference/get-search-tweets
The standard Twitter API search only goes back 7 days. The premium and enterprise APIs have 30-day search as well as full-archive search, but you will pay $$$.
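(As an aside: if you did want to restrict the search itself by location rather than read geo fields afterwards, the standard search endpoint accepts a geocode parameter of the form "latitude,longitude,radius". A rough sketch with made-up coordinates for central Paris, using the same api object as in the code below:)
# English tweets mentioning iphone within 50 km of central Paris (illustrative values)
for tweet in tweepy.Cursor(api.search, q="iphone", lang="en",
                           geocode="48.8566,2.3522,50km",
                           tweet_mode="extended").items(20):
    print(tweet.created_at, tweet.full_text)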
Unfortunately Tweepy still hasn't had its models documented:
https://github.com/tweepy/tweepy/issues/720
So if you want to look at the tweet object you can use the pprint package and run:
from pprint import pprint
pprint(tweet.__dict__)
One difference I noticed was that the "text" field in the JSON became "full_text" in the object.
There's also information on the original tweet in there if the one you found was a quote tweet; it has the same fields from what I could see.
Anyway, here's the code. I added a max tweet count for looping through the cursor while I was testing, to avoid blowing any API limits.
Let me know if you want CSV code, but it looks like you can handle that already.
import tweepy
# API credentials here
consumer_key = 'your-info'
consumer_secret = 'your-info'
access_token = 'your-info'
access_token_secret = 'your-info'
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth,wait_on_rate_limit=True,wait_on_rate_limit_notify=True)
searchString = "iPhone"
cursor = tweepy.Cursor(api.search, q=searchString, count=20, lang="en", tweet_mode='extended')
maxCount = 1
count = 0
for tweet in cursor.items():
    print()
    print("Tweet Information")
    print("================================")
    print("Text: ", tweet.full_text)
    print("Geo: ", tweet.geo)
    print("Coordinates: ", tweet.coordinates)
    print("Place: ", tweet.place)
    print()
    print("User Information")
    print("================================")
    print("Location: ", tweet.user.location)
    print("Geo Enabled? ", tweet.user.geo_enabled)
    count = count + 1
    if count == maxCount:
        break
Will output something like this:
Tweet Information
================================
Text: NowPlaying : Hashfinger - Leaving
https://derp.com
#iPhone free app https://derp.com
#peripouwebradio
Geo: None
Coordinates: None
Place: None
User Information
================================
Location: Greece
Geo Enabled? True
I'm very, very new to Python, as a disclosure.
I have successfully pulled all users who are members of a list on Twitter. I have also pulled all tweets of a user, based on screen name - both components are contained below. How do I combine these and pull all tweets of all users who are members of a list, please? Is this even possible? Everything below:
#GOAL: pull all tweets from all users who are members of a list.
#imports necessary methods from Twitter library
import json
import tweepy
import time
import csv
import sys
#authorises twitter
CONSUMER_KEY = 'SECRET'
CONSUMER_SECRET = 'SECRET'
ACCESS_TOKEN = 'SECRET'
ACCESS_SECRET = 'SECRET'
#authorisations
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
api = tweepy.API(auth, wait_on_rate_limit=True)
#returns members of a list & some details on them
for user in tweepy.Cursor(api.list_members, slug="uk-mps-labour", owner_screen_name="tweetminster", include_entities=True).items():
    print(f"{user.id}\t{user.screen_name}\t{user.name}\t{user.description}\t{user.location}\t{user.followers_count}\t{user.friends_count}\t{user.verified}")
#creates a loop to iterate through the list of user ids
#returns all tweets of a user
counter = 0 #establishes a counter to number tweets output
for status in tweepy.Cursor(api.user_timeline, screen_name="frogface", tweet_mode="extended").items():
    counter = counter + 1
    print(f"{counter}\t{status.user.id}\t{status.user.screen_name}\t{status.created_at}\t{status.full_text}")
When you are iterating through the users of the list, instead of printing the user details, add the screen_name to a list.
Next, iterate through the screen_names list and get each user's tweets. The code will look something like this:
screen_names = []
#returns members of a list & some details on them
for user in tweepy.Cursor(api.list_members, slug="uk-mps-labour", owner_screen_name="tweetminster", include_entities=True).items():
    screen_names.append(f"{user.screen_name}")

for i in screen_names:
    #returns all tweets of a user
    counter = 0 #establishes a counter to number tweets output
    for status in tweepy.Cursor(api.user_timeline, screen_name=i, tweet_mode="extended").items():
        counter = counter + 1
        print(f"{counter}\t{status.user.id}\t{status.user.screen_name}\t{status.created_at}\t{status.full_text}")
I have found a Python script for extracting tweets and storing them in a csv file. I am not familiar with Python yet. Besides the tweets, I also need to extract the date and the time of each tweet. I have found how to extract other characteristics, such as "retweeted" and "retweet_count", but I am still stuck on the date and time.
The script is here:
#!/usr/bin/env python
# encoding: utf-8
import tweepy #https://github.com/tweepy/tweepy
import csv
#Twitter API credentials
consumer_key = "..........................."
consumer_secret = "..........................."
access_key = "..........................."
access_secret = "..........................."
screename = "realDonaldTrump"  # screen names are passed without '@' or '#'
def get_all_tweets(screen_name):
    #Twitter only allows access to a user's most recent 3240 tweets with this method

    #authorize twitter, initialize tweepy
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_key, access_secret)
    api = tweepy.API(auth)

    #initialize a list to hold all the tweepy Tweets
    alltweets = []

    #make initial request for most recent tweets (200 is the maximum allowed count)
    new_tweets = api.user_timeline(screen_name=screename, count=200)
    screen_name = "Donald J. Trump"

    #save most recent tweets
    alltweets.extend(new_tweets)

    #save the id of the oldest tweet less one
    oldest = alltweets[-1].id - 1

    #keep grabbing tweets until there are no tweets left to grab
    while len(new_tweets) > 0:
        print "getting tweets before %s" % (oldest)

        #all subsequent requests use the max_id param to prevent duplicates
        new_tweets = api.user_timeline(screen_name=screename, count=200, max_id=oldest)

        #save most recent tweets
        alltweets.extend(new_tweets)

        #update the id of the oldest tweet less one
        oldest = alltweets[-1].id - 1

        print "...%s tweets downloaded so far" % (len(alltweets))

    #transform the tweepy tweets into a 2D array that will populate the csv
    outtweets = [[tweet.id_str, tweet.created_at, tweet.text.encode("utf-8"), tweet.favorite_count, tweet.retweet_count, tweet.favorited, tweet.retweeted] for tweet in alltweets]

    #write the csv
    with open('%s_tweets.csv' % screen_name, 'wb') as f:
        writer = csv.writer(f)
        writer.writerow(["id","created_at","text","favorite_count","retweet_count","favorited","retweeted"])
        writer.writerows(outtweets)

    pass
if __name__ == '__main__':
    #pass in the username of the account you want to download
    get_all_tweets(screename)
The Tweepy Tweet model has created_at:
created_at: Creation time of the Tweet (type: datetime.datetime | None).
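So the script above already captures it via tweet.created_at; if you want the date and time as separate columns, a small sketch over the same alltweets list (adjust the header row to match):
#transform the tweets, splitting created_at into separate date and time strings
outtweets = [[tweet.id_str,
              tweet.created_at.strftime("%Y-%m-%d"),   # date
              tweet.created_at.strftime("%H:%M:%S"),   # time
              tweet.text.encode("utf-8"),
              tweet.favorite_count,
              tweet.retweet_count] for tweet in alltweets]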
An interesting fact is that you can also derive the time from the tweet ID itself. Tweet IDs are k-sorted within a second bound. You can extract the timestamp for a tweet ID by right-shifting the ID by 22 bits and adding the Twitter epoch time of 1288834974657 (milliseconds).
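A quick sketch of that calculation (the shift and epoch constant come from Twitter's snowflake ID format; the result is milliseconds since the Unix epoch):
import datetime

TWITTER_EPOCH_MS = 1288834974657  # Twitter snowflake epoch, in milliseconds

def tweet_id_to_datetime(tweet_id):
    # the high bits of a snowflake ID encode milliseconds since the Twitter epoch
    ms_since_unix_epoch = (tweet_id >> 22) + TWITTER_EPOCH_MS
    return datetime.datetime.utcfromtimestamp(ms_since_unix_epoch / 1000.0)

# e.g. tweet_id_to_datetime(some_tweet.id) should roughly match some_tweet.created_at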