I've been trying to learn how to make a Twitter bot, and I made a simple search-and-reply bot: it searches for whenever someone tweets my name and replies. However, my bot has been replying to the same tweets over and over again. I tried adding a since variable in the loop that updates with each replied tweet's ID, then refined the search query with since_id=since, but that did nothing. Any suggestions? Here's my code:
import os
import random
import time
import tweepy

consumer_key = os.environ.get('TWITTER_CONSUMER_KEY')
consumer_secret = os.environ.get('TWITTER_CONSUMER_SECRET')
access_token = os.environ.get('TWITTER_ACCESS_TOKEN')
access_token_secret = os.environ.get('TWITTER_ACCESS_TOKEN_SECRET')

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

replies = ["example 1",
           "example 2",
           "example 3"]

if __name__ == "__main__":
    since = 0
    while True:
        # I added these exclusions because the bot was matching screen names too
        tweets = api.search(q="shalash -shalashtein -OsamaAhmadSalas -ShalashteinBot -ghghgu38678638 -athy_shalash",
                            since_id=since)
        for s in tweets:
            since = s.id
            rand = random.choice(replies)
            sn = s.user.screen_name
            m = "#%s %s" % (sn, rand)
            s = api.update_status(m, s.id)
        time.sleep(60)
Save the IDs of the tweets you've answered to a DB or a text file, then just check whether you've already answered before replying. (Also note that search results come back newest first, so since = s.id leaves since set to the oldest ID in the batch; use since = max(since, s.id) if you stick with since_id.)
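A minimal sketch of that bookkeeping with a plain text file (the file name and both helper names are mine, not from the bot above):

```python
REPLIED_FILE = "replied_ids.txt"

def load_replied_ids(path=REPLIED_FILE):
    """Read the IDs of tweets the bot has already answered, one per line."""
    try:
        with open(path) as f:
            return set(line.strip() for line in f if line.strip())
    except IOError:  # first run: no file yet
        return set()

def mark_replied(tweet_id, path=REPLIED_FILE):
    """Record a tweet ID so the bot never answers it twice."""
    with open(path, "a") as f:
        f.write(str(tweet_id) + "\n")
```

In the reply loop, skip any status whose str(s.id) is already in the set, and call mark_replied(s.id) after each successful update_status.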
I'm running the code below, which was given to me by an instructor, to grab the status based on the tweet_id in another dataframe I've already imported. When running the code, everything comes back "Fail". I don't receive any errors, so I'm not sure what I'm missing. When I requested my Twitter developer access I didn't have to answer a ton of questions like I've seen other people say they had to, so I'm curious whether I just don't have enough access?
import tweepy
from tweepy import OAuthHandler
import json
from timeit import default_timer as timer

# Query Twitter API for each tweet in the Twitter archive and save JSON in a text file
# These are hidden to comply with Twitter's API terms and conditions
consumer_key = 'HIDDEN'
consumer_secret = 'HIDDEN'
access_token = 'HIDDEN'
access_secret = 'HIDDEN'

auth = OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)
api = tweepy.API(auth, wait_on_rate_limit=True)

# NOTE TO STUDENT WITH MOBILE VERIFICATION ISSUES:
# df_1 is a DataFrame with the twitter_archive_enhanced.csv file. You may have to
# change line 17 to match the name of your DataFrame with twitter_archive_enhanced.csv
# NOTE TO REVIEWER: this student had mobile verification issues so the following
# Twitter API code was sent to this student from a Udacity instructor

# Tweet IDs for which to gather additional data via Twitter's API
tweet_ids = twitter_archive.tweet_id.values
len(tweet_ids)

# Query Twitter's API for JSON data for each tweet ID in the Twitter archive
count = 0
fails_dict = {}
start = timer()
# Save each tweet's returned JSON as a new line in a .txt file
with open('tweet_json.txt', 'w') as outfile:
    # This loop will likely take 20-30 minutes to run because of Twitter's rate limit
    for tweet_id in tweet_ids:
        count += 1
        print(str(count) + ": " + str(tweet_id))
        try:
            tweet = api.get_status(tweet_id, tweet_mode='extended')
            print("Success")
            json.dump(tweet._json, outfile)
            outfile.write('\n')
        except tweepy.TweepError as e:
            print("Fail")
            fails_dict[tweet_id] = e
            pass
end = timer()
print(end - start)
print(fails_dict)
I was just wondering if anyone knew how to list the usernames that a Twitter user is following, and their followers, in two separate .csv cells.
This is what I have tried so far.
import tweepy
import csv

consumer_key = ""
consumer_secret = ""
access_token = ""
access_token_secret = ""

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

csvFile = open('ID.csv', 'w')
csvWriter = csv.writer(csvFile)

users = ['AindriasMoynih1', 'Fiona_Kildare', 'daracalleary', 'CowenBarry', 'BillyKelleherTD', 'BrendanSmithTD']

for user_name in users:
    user = api.get_user(screen_name=user_name, count=200)
    csvWriter.writerow([user.screen_name, user.id, user.followers_count, user.followers_id, user.friends_id, user.description.encode('utf-8')])
    print(user.id)

csvFile.close()
Tweepy is a wrapper around the Twitter API.
According to the Twitter API documentation, you'll need to call the GET friends/ids to get a list of their friends (people they follow), and GET followers/ids to get their followers.
Using the wrapper, you'll invoke those API calls indirectly by calling the corresponding method in Tweepy.
Since there will be a lot of results, you should use the Tweepy Cursor to handle scrolling through the pages of results for you.
Try the code below. I'll leave it to you to handle the CSV aspect, and to apply it to multiple users.
import tweepy

access_token = "1234"
access_token_secret = "1234"
consumer_key = "1234"
consumer_secret = "1234"

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

for user in tweepy.Cursor(api.get_friends, screen_name="TechCrunch").items():
    print('friend: ' + user.screen_name)

for user in tweepy.Cursor(api.get_followers, screen_name="TechCrunch").items():
    print('follower: ' + user.screen_name)
New here, first post as well.
I'm currently trying to use Tweepy. I've successfully set it up so far and I'm able to tweet single images, so the code runs fine.
The reason is that I run an account that tweets images only, no actual text tweets.
I have a folder of hundreds of images that I go through every day to tweet, and then I found out about Tweepy. Is it possible to tell Tweepy to go into the folder of images and select one at random? I've done extensive searching and couldn't find anything at all.
All help is greatly, greatly appreciated!
Here's the code I've got at the moment (Python 2).
import tweepy
from time import sleep
consumer_key = 'Removed'
consumer_secret = 'Removed'
access_token = 'Removed'
access_token_secret = 'Removed'
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)
api.update_with_media('Image')
sleep(900)
print 'Tweeted!'
I'm assuming that you're iterating 100 times, given that you have 100 photos in your directory. I hope you don't mind, I took the liberty of placing your Twitter API instantiation/auth in a function (for reusability's sake :) ). For the getPathsFromDir() function, I adapted GoToLoop's solution from processing.org; you might want to check out the reference for more details. Also, practice placing your api.update_with_media()/api.update_status() calls in try-except blocks; you never know when the API will raise an odd exception. I hope my implementation works for you!
import os
import tweepy
from time import sleep

folderpath = "/path/to/your/directory/"
EXTS = (".png", ".jpg", ".jpeg", ".gif", ".tif", ".tiff", ".tga", ".bmp")

def tweepy_creds():
    consumer_key = 'Removed'
    consumer_secret = 'Removed'
    access_token = 'Removed'
    access_token_secret = 'Removed'
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    return tweepy.API(auth)

def getPathsFromDir(folder, exts=EXTS):
    # adapted from GoToLoop's listPaths idea, done here with os.listdir
    return [os.path.join(folder, f) for f in os.listdir(folder)
            if f.lower().endswith(exts)]

def tweet_photos(api):
    imagePaths = getPathsFromDir(folderpath)
    for x in imagePaths:
        status = "tweet text here"
        try:
            api.update_with_media(filename=x, status=status)
            print "Tweeted!"
            sleep(900)
        except Exception as e:
            print "encountered error! error deets: %s" % str(e)
            break

if __name__ == "__main__":
    tweet_photos(tweepy_creds())
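Since the original question asked for one random image rather than looping over the whole folder, here is a minimal sketch of that variant (pick_random_image and IMAGE_EXTS are my own names, not from the code above):

```python
import os
import random

IMAGE_EXTS = (".png", ".jpg", ".jpeg", ".gif", ".bmp")

def pick_random_image(folder):
    """Return the full path of one randomly chosen image file in the folder."""
    images = [os.path.join(folder, f) for f in os.listdir(folder)
              if f.lower().endswith(IMAGE_EXTS)]
    if not images:
        raise ValueError("no image files found in %s" % folder)
    return random.choice(images)
```

With the api object from tweepy_creds() you would then call api.update_with_media(filename=pick_random_image(folderpath)).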
I want to get tweets from a specific geo-location, so I used Tweepy's api.search method. I am successfully getting tweets from the specified location, but I am not getting the screen name or user ID. I expected that setting show_user to True would return the screen name of whoever posted the tweet, but I am not getting any screen name. Please tell me how I can get the screen name of the user who posts a tweet in a given geo-location, using Tweepy.
public_tweets = tweepy.Cursor(api.search, rpp=100,
                              geocode="44.269493,63.341332,5km",
                              since="2018-05-01", show_user="True",
                              tweet_mode="extended").items()
rpp is deprecated; use "count" instead (see https://developer.twitter.com/en/docs/tweets/search/api-reference/get-search-tweets.html).
The code below works with "100km" ("5km" returned no results).
consumer_key = '***'
consumer_secret = '***'
access_token = '***'
access_token_secret = '***'
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
tweets = api.search(count=100,geocode="44.269493,63.341332,5km",since="2018-05-01")
for tweet in tweets:
    print(tweet.user.id_str, tweet.user.screen_name, tweet.text)
I'm doing Twitter sentiment research at the moment. For this reason, I'm using the Twitter API to download all tweets on certain keywords. But my current code is taking a long time to build a large data file, so I was wondering whether there's a faster method.
This is what I'm using right now:
__author__ = 'gerbuiker'

import time

# Import the necessary methods from the tweepy library
from tweepy.streaming import StreamListener
from tweepy import OAuthHandler
from tweepy import Stream

# Variables that contain the user credentials to access the Twitter API
access_token = "XXXXXXXXXXXXX"
access_token_secret = "XXXXXXXX"
consumer_key = "XXXXX"
consumer_secret = "XXXXXXXXXXXXXX"

# This is a basic listener that just prints received tweets to stdout.
class StdOutListener(StreamListener):
    def on_data(self, data):
        try:
            # print data
            tweet = data.split(',"text":"')[1].split('","source')[0]
            print tweet
            saveThis = str(time.time()) + '::' + tweet  # saves time + actual tweet
            saveFile = open('twitiamsterdam.txt', 'a')
            saveFile.write(saveThis)
            saveFile.write('\n')
            saveFile.close()
            return True
        except BaseException, e:
            print 'failed ondata,', str(e)
            time.sleep(5)

    def on_error(self, status):
        print status

if __name__ == '__main__':
    # This handles Twitter authentication and the connection to the Twitter Streaming API
    l = StdOutListener()
    auth = OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    stream = Stream(auth, l)
    # This line filters Twitter Streams to capture data by keyword, e.g. 'Amsterdam'
    stream.filter(track=['KEYWORD which i want to check'])
This gets me about 1,500 tweets in one hour for a pretty popular keyword (Amsterdam). Does anyone know a faster method in Python?
To be clear: I want to download all tweets on a certain subject for the last month or year, for example. So the newest tweets don't have to keep coming in; the most recent ones for a period would be sufficient. Thanks!
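One thing worth fixing regardless of speed: splitting the raw payload on ',"text":"' breaks whenever a tweet contains that substring or the field order changes. Parsing the payload as JSON is safer. A minimal sketch (the field names text, extended_tweet, and full_text come from Twitter's streaming payload; the helper name is mine):

```python
import json

def extract_text(raw_data):
    """Pull the tweet text out of a raw streaming-API JSON payload."""
    payload = json.loads(raw_data)
    # Tweets longer than 140 chars carry their full text in a nested field
    if "extended_tweet" in payload:
        return payload["extended_tweet"]["full_text"]
    return payload.get("text", "")
```

Inside on_data you would call extract_text(data) instead of the split chain.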
I need something similar to this for academic research.
Were you able to fix it?
Would it be possible to specify a custom range of time from which to pull the data?
Sorry for asking here, but I couldn't send you a private message.