I'm creating a bot that posts COVID numbers every 60 minutes. I have it finished, but I don't know how to make it repeat. Any ideas? It's a little project I have in mind and this is the last thing left to do. (If you include the code from the post inside your solution, that would be very cool.)
import sys

import tweepy
import requests
from lxml import html

CONSUMER_KEY = 'XXXX'
CONSUMER_SECRET = 'XXXX'
ACCESS_TOKEN = 'XXXX'
ACCESS_TOKEN_SECRET = 'XXXX'
def create_tweet():
    response = requests.get('https://www.worldometers.info/coronavirus/')
    doc = html.fromstring(response.content)
    total, deaths, recovered = doc.xpath('//div[@class="maincounter-number"]/span/text()')
    tweet = f'''Coronavirus Latest Updates
Total cases: {total}
Recovered: {recovered}
Deaths: {deaths}
Source: https://www.worldometers.info/coronavirus/
#coronavirus #covid19 #coronavirusnews #coronavirusupdates #COVID19
'''
    return tweet
if __name__ == '__main__':
    auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
    auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
    # Create API object
    api = tweepy.API(auth)
    try:
        api.verify_credentials()
        print('Authentication Successful')
    except Exception:
        print('Error while authenticating API')
        sys.exit(5)
    tweet = create_tweet()
    api.update_status(tweet)
    print('Tweet successful')
You can simply add this statement at the end of the code (it needs from time import sleep at the top):
sleep(3600)
If you want it to run endlessly, you can do it like this:
while True:
    # insert your main code here
    sleep(3600)
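Applied to the bot above, a minimal sketch of that loop (reusing the create_tweet() function and the api object from the question) could look like this:

import time

while True:
    tweet = create_tweet()
    api.update_status(tweet)
    print('Tweet successful')
    time.sleep(3600)  # wait one hour before posting again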
You might want to use a scheduler; there is already a built-in one in Python (sched). Read more about it here: https://docs.python.org/3/library/sched.html
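For example, a minimal sketch using sched (again reusing create_tweet() and api from the question; the one-hour delay matches the bot's schedule):

import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)

def post_update():
    api.update_status(create_tweet())
    # re-enter the event so it fires again in one hour
    scheduler.enter(3600, 1, post_update)

scheduler.enter(3600, 1, post_update)
scheduler.run()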
You have to use a timer like the one below.
(The function refresh reschedules itself every hour.)
from threading import Timer

def refresh(delay_repeat=3600):  # 3600 sec for one hour
    # Your refresh script here
    # ....
    # timer: schedule the next run
    Timer(delay_repeat, refresh, (delay_repeat, )).start()

refresh()  # call it once to start the cycle
Have a look at my runnable refresh example here to get an idea:
How can I dynamically update my matplotlib figure as the data file changes?
You can use a task scheduler in Python. "schedule" is a Python library which you can install using pip or conda. This library allows you to rerun a function after a given interval (daily, hourly, weekly, etc.).
First, install the library using pip:
pip install schedule
Second, put your code in a function, e.g.:
def theCode():
    print("Do Something")
Third, set the schedule (this also needs import schedule and import time at the top):
schedule.every(2).hours.do(theCode)

while True:
    schedule.run_pending()
    time.sleep(10)
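For the bot in the question, which should post every 60 minutes, a hedged sketch putting it all together (reusing create_tweet() and api from the question):

import schedule
import time

def post_covid_update():
    api.update_status(create_tweet())

schedule.every(60).minutes.do(post_covid_update)

while True:
    schedule.run_pending()
    time.sleep(10)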
A little help, please. This is what I am working with now (I got it here on Stack Overflow) and it works very well, but it seems to only act on the most recent accounts in the list of accounts that don't follow me back. I want to start unfollowing accounts from oldest to newest, because I keep hitting the API limit. I thought of making a list of followers and reversing it, then plugging that in somewhere, but I'm not quite sure how to do that or what the syntax would be. Thanks in advance.
import tweepy
from cred import *
from config import QUERY, UNFOLLOW, FOLLOW, LIKE, RETWEET

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, wait_on_rate_limit=True)

def main():
    try:
        if UNFOLLOW:
            my_screen_name = api.get_user(screen_name='YOUR_SCREEN_NAME')
            for follower in my_screen_name.friends():
                status = api.get_friendship(source_id=my_screen_name.id,
                                            source_screen_name=my_screen_name.screen_name,
                                            target_id=follower.id,
                                            target_screen_name=follower.screen_name)
                if status[0].followed_by:
                    print('{} is following you'.format(follower.screen_name))
                else:
                    print('{} is not following you'.format(follower.screen_name))
                    api.destroy_friendship(screen_name=follower.screen_name)
    except tweepy.errors.TweepyException as e:
        print(e)

while True:
    main()
Here is the config.py file:
# config.py
UNFOLLOW = True
I recently assembled some pieces of code to do exactly this, so I'll just copy and paste what I already have instead of updating your code, but I can point out the main differences (and give some tips).
The full code:
import tweepy
from cred import *

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, wait_on_rate_limit=True)

def unfollower():
    followers = api.get_follower_ids(screen_name=api.verify_credentials().screen_name)
    friends = api.get_friend_ids(screen_name=api.verify_credentials().screen_name)
    print("You follow:", len(friends))
    for friend in friends[::-1]:
        if friend not in followers:
            api.destroy_friendship(user_id=friend)
    friends = api.get_friend_ids(screen_name=api.verify_credentials().screen_name)
    print("Now you're following:", len(friends))

unfollower()
Now, what happened here and what is different from your code?
These two variables:
followers = api.get_follower_ids(screen_name=api.verify_credentials().screen_name)
friends = api.get_friend_ids(screen_name=api.verify_credentials().screen_name)
create lists with the ids of both the followers (accounts that follow you) and the friends (accounts you are following). Now all we need to do is compare the two.
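A minimal sketch of that comparison (turning followers into a set is an optional speed-up I'm adding here, not part of the code above):

followers = set(followers)  # set membership tests are O(1)
not_following_back = [f for f in friends if f not in followers]
print(len(not_following_back), "accounts don't follow you back")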
There is some discussion about the Twitter rate limits and how cursor-based calls have lower limits than other endpoints, but I'm not qualified to explain the whys. So let's just assume that, to avoid the tightest limits, it's best not to use requests with an intrinsically small rate limit, like api.get_friendship followed by fetching the screen_name; instead, I'm using the get_friend_ids method.
The next part involves what you called "make a list of followers and reverse it". Well, the list is already there in the variable friends, so all we need to do now is read it in reverse with the following loop:
for friend in friends[::-1]:
The slice [::-1] says "read the list with a step of -1", i.e. roughly "read the list backwards".
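For example, a quick illustration in the REPL:

>>> friends = [101, 102, 103]
>>> friends[::-1]
[103, 102, 101]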
I think those are the major points. I created a function, but you really don't even need one; it just makes it easier to upgrade to a class later if you need to. This way you don't need the while True: main() loop either: just call unfollower() and the script ends on its own once the unfollows are done.
Now some minor points that might improve your code:
Instead of using
screen_name='YOUR_SCREEN_NAME'
which forces you to keep a config file or hardcode the screen name, you can use
screen_name=api.verify_credentials().screen_name
This way it automatically knows that you want the authenticating user's information. (Note that I didn't use this part in my code, since the get_friend_ids method does not need the screen_name.)
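For example, a quick check (reusing the api object):

me = api.verify_credentials()
print(me.screen_name)  # the authenticating user's handle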
Now, this part:
from cred import *
from config import QUERY, UNFOLLOW, FOLLOW, LIKE, RETWEET
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, wait_on_rate_limit=True)
First, I've eliminated the need for the config file. You can also get rid of all the extra names that from cred import * pulls in by moving the API setup into cred.py itself:
# cred.py
import tweepy

# your credentials live in this file
consumer_key = 'XXXX'
consumer_secret = 'XXXX'
access_token = 'XXXX'
access_token_secret = 'XXXX'

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, wait_on_rate_limit=True)
Now you can import only the api object with from cred import api, and the code becomes cleaner:
from cred import api

def unfollower():
    followers = api.get_follower_ids(screen_name=api.verify_credentials().screen_name)
    friends = api.get_friend_ids(screen_name=api.verify_credentials().screen_name)
    print("You follow:", len(friends))
    for friend in friends[::-1]:
        if friend not in followers:
            api.destroy_friendship(user_id=friend)
    friends = api.get_friend_ids(screen_name=api.verify_credentials().screen_name)
    print("Now you're following:", len(friends))

unfollower()
Lastly, if anyone is having problems with api.get_friend_ids or get_follower_ids: remember that the tweepy 4.x update renamed some methods. The ones I remember are:
followers_ids is now get_follower_ids
friends_ids is now get_friend_ids
me() is now verify_credentials()
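If you're not sure which version you have, a quick hedged check:

import tweepy

print(tweepy.__version__)
# 4.x exposes the new names, 3.x only the old ones:
print(hasattr(tweepy.API, 'get_friend_ids'))  # True on 4.x
print(hasattr(tweepy.API, 'friends_ids'))     # True on 3.x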
Well, I guess that's it; you can check the rest in the docs.
Happy pythoning!
(Background)
I have an ERP application which is managed from a WebLogic console. Recently we noticed that the same activities we perform from the console can be performed using vendor-provided REST API calls, so we wanted to use this approach programmatically and build some automation.
This is the page from which we control one of the instances (console screenshot omitted).
The same button acts as Stop and Start to manage the instance.
Start and stop use different API calls, which makes sense.
The complete API doc is at: https://docs.oracle.com/cd/E61420_01/doc.92/e80710/smcrestapis.htm#BABFHBJI
(Now)
I wrote a program in Python using the requests library to call these APIs, and it works fine.
The API response can take anywhere between 20 and 30 seconds when I use the stopInstance API, and normally 60 to 90 seconds when I use the startInstance API; but if there is a problem starting the instance, it takes more than 300 seconds and then goes into an indefinite wait.
My problem: when starting an instance, I want to wait at most 100 seconds for the response. If it takes longer, the program should display a message like "Instance was not able to start in 100 seconds".
This is my program. I am taking input from a text file, and all the values in it have been verified.
import requests
import json
import importlib.machinery
import importlib.util
import numpy
import time
import sys

loader = importlib.machinery.SourceFileLoader('SM', 'sm_details.txt')
spec = importlib.util.spec_from_loader(loader.name, loader)
mod = importlib.util.module_from_spec(spec)
loader.exec_module(mod)

username = str(mod.username)
password = str(mod.password)
hostname = str(mod.servermanagerHostname)
portnum = str(mod.servermanagerPort)
instanceDetails = numpy.array(mod.instanceName)

authenticationAPI = "http://"+hostname+":"+portnum+"/manage/mgmtrestservice/authenticate"
startInstanceAPI = "http://"+hostname+":"+portnum+"/manage/mgmtrestservice/startinstance"

headers = {
    'Content-Type': 'application/json',
    'Cache-Control': 'no-cache',
}

data = {}
data['username'] = username
data['password'] = password

instanceNameDict = {'instanceName': ''}

# Authentication request and storing token
response = requests.post(authenticationAPI, data=json.dumps(data), headers=headers)
token = response.headers['TOKEN']
head2 = {}
head2['TOKEN'] = token

def start(instance):
    print(f'\nTrying to start instance : '+instance['instanceName'])
    startInstanceResponse = requests.post(startInstanceAPI, data=json.dumps(instance), headers=head2)  # this is where the program gets stuck; it never reaches the time.sleep step
    time.sleep(100)
    if startInstanceResponse.status_code == 200:
        print('Instance '+instance['instanceName']+' started.')
    else:
        print('Could not start instance in 100 seconds')
        sys.exit(1)
I would suggest using the timeout parameter in requests:
requests.post(startInstanceAPI, data=json.dumps(instance), headers=head2, timeout=100.0)
You can tell Requests to stop waiting for a response after a given number of seconds with the timeout parameter. Nearly all production code should use this parameter in nearly all requests. Failure to do so can cause your program to hang indefinitely.
Source
Here's the requests timeout documentation; you will find more details there, including exception handling.
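Applied to the start() function from the question, a minimal sketch (it reuses startInstanceAPI, head2, json, and sys from the question's script, and catches the Timeout exception so you can print the message you wanted):

import requests

def start(instance):
    print('\nTrying to start instance : ' + instance['instanceName'])
    try:
        startInstanceResponse = requests.post(
            startInstanceAPI,
            data=json.dumps(instance),
            headers=head2,
            timeout=100.0,  # give up if no response arrives in time
        )
    except requests.exceptions.Timeout:
        print('Instance was not able to start in 100 seconds')
        sys.exit(1)
    if startInstanceResponse.status_code == 200:
        print('Instance ' + instance['instanceName'] + ' started.')

Note that timeout limits how long requests waits for the connection and for data to arrive, not the total transfer time; for a call that hangs with no response at all, it does exactly what you need here.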
I'm collecting tweets with their replies from Twitter's API to build a data set, and I'm using the tweepy library in Python for that. The problem is that I get this message a lot (Rate limit reached. Sleeping for: <seconds>), which delays me, and I need to collect as much data as possible in the shortest time.
I read that Twitter has a rate limit of, I think, 15 requests per 15 minutes or something like that, but in my situation I can only gather one or two tweets before it stops again; sometimes it sleeps for 15 minutes and then immediately stops for another 15 minutes, without giving me any time in between. I don't know whether the problem is caused by my code or not.
# Import the necessary package to process data in JSON format
try:
    import json
except ImportError:
    import simplejson as json

# Import the tweepy library
import tweepy
import sys

# Variables that contain the user credentials to access the Twitter API
ACCESS_TOKEN = '-'
ACCESS_SECRET = '-'
CONSUMER_KEY = '-'
CONSUMER_SECRET = '-'

# Set up tweepy to authenticate with the Twitter credentials:
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)

# Create the api to connect to twitter with your credentials
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True, compression=True)

file2 = open('replies.csv', 'w', encoding='utf-8-sig')
replies = []
non_bmp_map = dict.fromkeys(range(0x10000, sys.maxunicode + 1), 0xfffd)

for full_tweets in tweepy.Cursor(api.search, q='#عربي', timeout=999999, tweet_mode='extended').items():
    if (not full_tweets.retweeted) and ('RT @' not in full_tweets.full_text):
        for tweet in tweepy.Cursor(api.search, q='to:'+full_tweets.user.screen_name, result_type='recent', timeout=999999, tweet_mode='extended').items(1000):
            if hasattr(tweet, 'in_reply_to_status_id_str'):
                if (tweet.in_reply_to_status_id_str == full_tweets.id_str):
                    replies.append(tweet.full_text)
        print(full_tweets._json)
        file2.write("{ 'id' : " + full_tweets.id_str + "," + "'Replies' : ")
        for elements in replies:
            file2.write(elements.strip('\n') + " , ")
        file2.write("}\n")
        replies.clear()

file2.close()
$ python code.py > file.csv
Rate limit reached. Sleeping for: 262
Rate limit reached. Sleeping for: 853
Hoping this might help
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=False, compression=True)
Just add this parameter to the API call to avoid the sleep:
sleep_on_rate_limit=False
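If you'd rather handle the wait yourself instead of letting tweepy sleep silently, a hedged sketch (with wait_on_rate_limit=False, tweepy 3.x raises RateLimitError when the limit is hit; the 15-minute back-off below is an assumption based on Twitter's documented rate-limit windows):

import time
import tweepy

cursor = tweepy.Cursor(api.search, q='#عربي', tweet_mode='extended').items()
while True:
    try:
        tweet = next(cursor)
    except tweepy.RateLimitError:
        time.sleep(15 * 60)  # back off for one 15-minute window
        continue
    except StopIteration:
        break
    print(tweet.full_text)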
Hi!
I am experiencing an issue with the tweepy library for Python. The first time I launched the script below, everything worked perfectly; the second time, the script stopped unexpectedly.
I have not found anything about this behavior: the listener stops after a few seconds, and I do not get any error code or message.
Here is the simple code:
import tweepy
import sys
import json
from textwrap import TextWrapper
from datetime import datetime
from elasticsearch import Elasticsearch

consumer_key = "hidden"
consumer_secret = "hidden"
access_token = "hidden"
access_token_secret = "hidden"

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)

ES_HOST = {"host": "localhost", "port": 9200}
es = Elasticsearch(hosts=[ES_HOST])

class StreamListener(tweepy.StreamListener):
    print('Starting StreamListener')
    status_wrapper = TextWrapper(width=60, initial_indent=' ', subsequent_indent=' ')

    def on_status(self, status):
        try:
            print '\n%s %s' % (status.author.screen_name, status.created_at)
            json_data = status._json
            #print json_data['text']
            es.create(index="idx_twp",
                      doc_type="twitter_twp_nintendo",
                      body=json_data)
        except Exception, e:
            print e
            pass

print('Starting Receiving')
streamer = tweepy.Stream(auth=auth, listener=StreamListener(), timeout=3000000000)
# Fill in your own keywords below
terms = ['nintendo']
streamer.filter(None, terms)
#streamer.userstream(None)
print('Ending program')
And here is the output (only 2 seconds):
[root@localhost ~]# python projects/m/twitter/twitter_logs.py
Starting StreamListener
Starting Receiving
Ending program
I am using Python 2.7.5.
Any ideas?
Hi!
I solved this weird issue by changing my Python version to 3.5 via virtualenv. For now, it works well.
This was probably due to the Python version; anyway, if someone else hits this, I just recommend using virtualenv to test another Python version and see what happens.
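For reference, roughly the commands I mean (a sketch; the environment name is an example):

virtualenv -p python3.5 venv
source venv/bin/activate
pip install tweepy elasticsearch
python twitter_logs.py

Note that under Python 3 the print statements in the script above need parentheses (print(...)), and except Exception, e becomes except Exception as e.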
FYI: I already opened issue #759 in the GitHub project.