Essentially, I've written a program for a Reddit bot designed to list certain aspects of a Reddit post, such as the title or poster, as long as they fit certain criteria. I want it to run automatically once every hour, and I also want it to make a post once every 7 days. Could someone share code for these, please?
#!/usr/bin/python
import base64
import praw

# Enter your Reddit credentials into the variables below
userAgent = 'RRBot-Beta'
cID = 'Enter your client ID'
cSC = 'Enter your secret'
userN = 'Enter your Reddit username'
userP = 'Enter your Reddit password'

unfilled_post_URL = []
unfilled_post_url_B64 = []
submission_title_and_poster = {}
filled_requests = 0
unfilled_requests = 0
requests = 0

reddit = praw.Reddit(user_agent=userAgent,
                     client_id=cID,
                     client_secret=cSC,
                     username=userN,
                     password=userP)

subreddit = reddit.subreddit('riprequestsnew')  # any subreddit you want to monitor
title_keywords = {'requests', 'request'}  # set of keywords to look for in post titles
comment_keyword = "share"

for submission in subreddit.new():  # go through the subreddit's posts
    lowercase_title = submission.title.lower()  # lowercase the title so we can compare our keywords with it
    if any(keyword in lowercase_title for keyword in title_keywords):  # one of our keywords matches the title
        requests = requests + 1  # tracks the number of requests
        submission.comments.replace_more(limit=0)
        comments = " ".join(comment.body.lower() for comment in submission.comments.list())  # all comment text, lowercased
        if comment_keyword in comments:  # someone has shared something, so this post counts as filled
            filled_requests += 1
        else:  # no one has shared anything, so record the post title, poster and URL
            submission_title_and_poster.update({submission.title: str(submission.author)})
            unfilled_post_URL.append(submission.url)

for url in unfilled_post_URL:  # B64-encodes each URL and adds it to a new list
    url_encoded = base64.b64encode(url.encode("utf-8"))
    unfilled_post_url_B64.append(url_encoded)
    unfilled_requests += 1
Schedule (https://pypi.python.org/pypi/schedule) seems to be what you need.
You will have to install its Python library:
pip install schedule
then modify the sample script:
import schedule
import time

def job():
    print("I'm working...")

schedule.every(10).seconds.do(job)
schedule.every(10).minutes.do(job)
schedule.every().hour.do(job)
schedule.every().day.at("10:30").do(job)
schedule.every(5).to(10).minutes.do(job)
schedule.every().monday.do(job)
schedule.every().wednesday.at("13:15").do(job)
schedule.every().minute.at(":17").do(job)

while True:
    schedule.run_pending()
    time.sleep(1)
Put your own code in job() and use whichever call matches your timing.
Then you can run it with nohup.
Be advised that you will need to start it again if you reboot.
Here are the docs: https://schedule.readthedocs.io/
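For your specific case, a minimal sketch could look like this, assuming your hourly subreddit scan is wrapped in a function called scan_requests() and your weekly submission in make_weekly_post() (both names are placeholders for your own code):

import schedule
import time

def scan_requests():
    # your existing request-scanning code goes here
    ...

def make_weekly_post():
    # your posting code (e.g. a call to subreddit.submit) goes here
    ...

schedule.every().hour.do(scan_requests)      # run the scan once every hour
schedule.every(7).days.do(make_weekly_post)  # make a post once every 7 days

while True:
    schedule.run_pending()
    time.sleep(1)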
I don't know how to properly ask this question, so it might seem kind of off; sorry about that.
I made a Telegram bot that gets some images from a website and sends them to your chat. However, when a user calls the command, the photos are also sent to the other users who have started the bot.
For instance, if User A calls the command to get the photos, the bot will send them to him as well as to User B, User C and User D, all stacking together as if it were a single call to everyone using the bot.
import requests
import os
from tqdm import tqdm
from bs4 import BeautifulSoup as bs
from urllib.parse import urljoin, urlparse
import re
import telebot

API_KEY = os.getenv("API_KEY")
bot = telebot.TeleBot(API_KEY)

url_mainpage = "https://url.com"

soup = bs(requests.get(url_mainpage).content, "html.parser")
full_link = soup.find("h5", class_="elementor-image-box-title")
selectlist = full_link.select(".elementor-image-box-title a")
for a in selectlist:
    global lastchapterlink
    lastchapterlink = a['href']

images = []
stripped_images = []

def download_last_chapter():
    soup = bs(requests.get(lastchapterlink).content, "html.parser")
    images_link = soup.findAll("img")
    for img in images_link:
        images.append(img.get("src"))
    for link in images:
        stripped_images.append(link.strip())
    print(stripped_images)

@bot.message_handler(commands=["last_chapter"])
def send_images(message):
    download_last_chapter()
    for telegram_send in stripped_images:
        try:
            bot.send_photo(message.chat.id, photo=telegram_send)
        except:
            None

bot.polling()
This is the part of the code containing the bot.
Per the API documentation, the bot will reply in whatever channel it sees the message in. Are your users DMing it, or posting in a shared channel that you're all part of? Also, you're not clearing stripped_images between calls; you're just appending the new images to it.
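A minimal sketch of that second point, reusing the imports, lastchapterlink and bot from your script: return a fresh list from download_last_chapter() instead of appending to module-level lists, so each command only sends the images fetched for that call.

def download_last_chapter():
    # build the list locally so nothing accumulates between commands
    soup = bs(requests.get(lastchapterlink).content, "html.parser")
    return [img.get("src").strip() for img in soup.findAll("img") if img.get("src")]

@bot.message_handler(commands=["last_chapter"])
def send_images(message):
    for src in download_last_chapter():
        try:
            bot.send_photo(message.chat.id, photo=src)
        except Exception:
            pass  # skip images Telegram rejects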
I'm using Spotipy and LyricsGenius to open lyrics in a web browser from a terminal.
I can open a URL for one song, but I have to re-run the script for each new song. What are some ways to detect the end of a song using Spotipy?
import spotipy
import webbrowser
import lyricsgenius as lg
...

# Create our spotifyObject
spotifyObject = spotipy.Spotify(auth=token)

# Create our geniusObject
geniusObject = lg.Genius(access_token)
...

while True:
    currently_playing = spotifyObject.currently_playing()
    artist = currently_playing['item']['artists'][0]['name']
    title = currently_playing['item']['name']
    search_query = artist + " " + title
    # if (currently_playing has changed):
    song = geniusObject.search_songs(search_query)
    song_url = song['hits'][0]['result']['url']
    webbrowser.open(song_url)
I was reading relevant threads such as this and this, and read through the documentation, but I could not find an answer to whether this can be handled by Spotipy. I would appreciate any suggestions, thank you.
I used time.sleep(length), with the argument 'length' standing for the remaining duration of the current track.
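A minimal sketch of that idea, assuming spotifyObject is the authenticated client from the question; the currently-playing payload exposes progress_ms and the track's duration_ms, so the remaining time is just their difference:

import time

def sleep_until_track_ends(spotify_object):
    playback = spotify_object.currently_playing()
    if playback and playback.get("item"):
        remaining_ms = playback["item"]["duration_ms"] - playback["progress_ms"]
        # sleep until the current track should be over, then re-check
        time.sleep(max(remaining_ms / 1000, 1))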
I am just writing a simple 'grab followers' script in Python, using tweepy. When I run the script, everything works fine and does what it needs to, but I am being rate limited very heavily, almost instantly it seems.
I run other scripts through tweepy; hell, I've scraped nearly 800 accounts' tweets before being rate limited previously. Multiple times, even.
Can someone shed some light on this? My account was even suspended, temporarily, last night for simply trying to let it finish :-\
import sys
import tweepy

APP_KEY = ''
APP_SECRET = ''
result = []

auth = tweepy.AppAuthHandler(APP_KEY, APP_SECRET)
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)

if not api:
    print "Didn't Authenticate!"
    sys.exit(-1)

def get_followers(screen_name, filename):
    result = []
    for page in tweepy.Cursor(api.followers_ids, screen_name=screen_name).pages():
        result.extend(page)
    write_to = open(filename, 'w')
    for num in result:
        name = api.get_user(num)
        write_to.write(str(name.screen_name) + '\n')
    write_to.write(str(len(result)))
    write_to.close()

user_input = raw_input('Please enter Twitter name to get followers, or hit enter to use default file:')
if len(user_input) == 0:
    user_input = 'names.txt'
    for name in open(user_input):  # read one screen name per line from the default file
        name = name.strip()
        file_name = str(name) + '.txt'
        get_followers(name, file_name)
else:
    file_name = str(user_input) + '.txt'
    get_followers(user_input, file_name)
You are using the followers/ids endpoint. The rate limit is 15 requests in a 15-minute window; see the docs here. Try making one request per minute and you should be fine.
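A minimal sketch of that pacing, assuming api is the authenticated client from the question; sleeping roughly a minute between pages keeps you under 15 requests per 15-minute window:

import time
import tweepy

def get_follower_ids(screen_name):
    ids = []
    for page in tweepy.Cursor(api.followers_ids, screen_name=screen_name).pages():
        ids.extend(page)
        time.sleep(60)  # stay under ~15 followers/ids requests per 15 minutes
    return ids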
import praw
import time
from selenium import webdriver

driver = webdriver.Chrome()

r = praw.Reddit(client_id='XXXXXXXXXX',
                client_secret='XXXXXXXXXXXXX', password='XXXXXXXXX',
                user_agent='Grand Street Tech', username='grandstreetsupreme')

subreddit = r.subreddit('supremeclothing')

submissions = []
users = []
for submission in r.subreddit('supremeclothing').new(limit=999):
    for comment in submission.comments:
        author = comment.author
        users.append(author)
It takes like 10 minutes to complete, and when it does, it doesn't do anything.
There is no print statement for the users, right? Put a statement like the one below.
print users
This is because you just created the list users; you need to tell Python to print it.
After your for loop, put print users
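For instance, once the loop has finished (shown here with the Python 3 print function; on Python 2 the bare print users from the answer is equivalent):

# after the loop that fills `users`
print(users)       # show the collected comment authors
print(len(users))  # or just how many there are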
I am new to the Facebook API. Currently, I am trying to print out ALL the comments that have been posted on the Facebook page called 'leehsienloong'. However, I could only print out a total of 700+ comments, and I'm sure there are more than that in total.
I found out that the problem is that I did not request the next page of results to print more comments. I have read about paging in the Facebook API, but I still do not understand how to write the code for it.
Is there anyone out there who will be able to help/assist me? I really need help. Thank you.
Here is my code, without paging:
import facebook  # sudo pip install facebook-sdk
import itertools
import json
import re
import requests

access_token = "XXX"
user = 'leehsienloong'

graph = facebook.GraphAPI(access_token)
profile = graph.get_object(user)
posts = graph.get_connections(profile['id'], 'posts')

Jstr = json.dumps(posts)
JDict = json.loads(Jstr)

count = 0
for i in JDict['data']:
    allID = i['id']
    try:
        allComments = i['comments']
        for a in allComments['data']:
            count += 1
            print a['message']
    except (UnicodeEncodeError):
        pass

print count
You can use the limit parameter to increase the number of comments fetched. The default is 25. You can increase it like this:
posts = graph.get_connections(profile['id'], 'posts', limit=100)
But a more convenient way would be to get the previous and next pages from paging and make multiple requests.
To get all the comments of a post, the logic should be something like:
comments = []
for post in posts["data"]:
    post_comments = graph.get_connections(id=post["id"], connection_name="comments")
    comments.extend(post_comments["data"])
    while True:
        try:
            # follow the "next" paging link until there are no more pages
            post_comments = requests.get(post_comments["paging"]["next"]).json()
            comments.extend(post_comments["data"])
        except KeyError:
            break
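As a quick check after the loop (a sketch, assuming the variables above), you can print how many comments were collected and peek at a few messages:

print(len(comments))  # total comments gathered across all pages
for comment in comments[:5]:
    print(comment.get("message", ""))  # first few comment messages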