I don't know how to properly ask this question, so it might seem kind of off; sorry about that.
I made a Telegram bot that gets some images from a website and sends them to your chat. However, when a user calls the command, the photos are also sent to the other users who have started the bot.
For instance, if User A calls the command to get the photos, the bot will send them to him as well as to User B, User C, and User D, all stacking together as if it were a single call to everyone using the bot.
import requests
import os
from tqdm import tqdm
from bs4 import BeautifulSoup as bs
from urllib.parse import urljoin, urlparse
import re
import telebot

API_KEY = os.getenv("API_KEY")
bot = telebot.TeleBot(API_KEY)

url_mainpage = "https://url.com"

soup = bs(requests.get(url_mainpage).content, "html.parser")
full_link = soup.find("h5", class_="elementor-image-box-title")
selectlist = full_link.select(".elementor-image-box-title a")
for a in selectlist:
    lastchapterlink = a['href']

images = []
stripped_images = []

def download_last_chapter():
    soup = bs(requests.get(lastchapterlink).content, "html.parser")
    images_link = soup.findAll("img")
    for img in images_link:
        images.append(img.get("src"))
    for link in images:
        stripped_images.append(link.strip())
    print(stripped_images)

@bot.message_handler(commands=["last_chapter"])
def send_images(message):
    download_last_chapter()
    for telegram_send in stripped_images:
        try:
            bot.send_photo(message.chat.id, photo=telegram_send)
        except Exception:
            pass

bot.polling()
This is the part of the code containing the bot.
Per the API documentation, the bot will reply in whatever channel it sees the message in. Are your users DMing it, or posting in a shared channel that you're all part of? Also, you're not clearing stripped_images between calls; you're just appending the new images to it.
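A minimal sketch of that second fix, reusing the names from your script, so each call builds its own list and photos only go to the chat that issued the command:

def download_last_chapter():
    # build the list locally instead of appending to module-level state
    soup = bs(requests.get(lastchapterlink).content, "html.parser")
    images = [img.get("src") for img in soup.findAll("img")]
    return [link.strip() for link in images]

@bot.message_handler(commands=["last_chapter"])
def send_images(message):
    # message.chat.id is the caller's chat, so nothing leaks to other users
    for telegram_send in download_last_chapter():
        try:
            bot.send_photo(message.chat.id, photo=telegram_send)
        except Exception:
            pass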
http://www.youtube.com/dev/ I've already checked the YouTube API dev info and have not found anything pertaining to this.
Obtain a YouTube API key
To be able to make requests of this type you must have an API key:
Go to https://console.developers.google.com/
Create a new project
Find YouTube Data API v3 and click on it
Enable the API
Go to Credentials and create one for the project
Write it down and insert it in the script below
This script uses the API key created previously to make search requests for channels with randomly generated names, and writes the data to two files: the first stores all the info, the second only the ID, the channel name, and the link to the channel. I hope it is what you are looking for ;)
import json
import urllib.request
import string
import random

channels_to_extract = 100
API_KEY = ''  # your API key

while True:
    # random name of channel to search for
    random_name = ''.join(random.choice(string.ascii_uppercase) for _ in range(random.randint(3, 10)))
    urlData = "https://www.googleapis.com/youtube/v3/search?key={}&maxResults={}&part=snippet&type=channel&q={}".format(API_KEY, channels_to_extract, random_name)
    webURL = urllib.request.urlopen(urlData)
    data = webURL.read()
    encoding = webURL.info().get_content_charset('utf-8')
    results = json.loads(data.decode(encoding))
    results_id = {}
    if results['pageInfo']["totalResults"] >= channels_to_extract:  # may return 0 results because the name is random
        break  # break once a result is found

for result in results['items']:
    # get the id and the link of the channel for every result
    results_id[result['id']['channelId']] = [result["snippet"]["title"],
                                             'https://www.youtube.com/channel/' + result['id']['channelId']]

with open("all_info_channels.json", "w") as f:  # write all result info to a JSON file
    json.dump(results, f, indent=4)

with open("only_id_channels.json", "w") as f:  # write only the channel ids to a JSON file
    json.dump(results_id, f, indent=4)

for channelId in results_id.keys():
    print('Link --> https://www.youtube.com/channel/' + channelId)  # link to the YouTube channel for every result
I was trying to create an Instagram post downloader bot with Python:
import requests
import re

# get the URL's details
def get_response(url):
    r = requests.get(url)
    while r.status_code != 200:
        r = requests.get(url)
    return r.text

def prepare_urls(matches):
    return list({match.replace("\\u0026", "&") for match in matches})

url = input('Enter Instagram URL: ')
response = get_response(url)

# check if there is a video URL or a picture URL in the JSON webpage that is opened
vid_matches = re.findall('"video_url":"([^"]+)"', response)
pic_matches = re.findall('"display_url":"([^"]+)"', response)
vid_urls = prepare_urls(vid_matches)
pic_urls = prepare_urls(pic_matches)

if vid_urls:
    print('Detected Videos:\n{0}'.format('\n'.join(vid_urls)))
if pic_urls:
    print('Detected Pictures:\n{0}'.format('\n'.join(pic_urls)))
if not (vid_urls or pic_urls):
    print('Could not recognize the media in the provided URL.')
After I finished the code, I tried it with a video link and it worked. An hour later I tried the same video link, but it printed the third condition: "Could not recognize the media in the provided URL.".
I'm confused. As you can see, I never used my login credentials in the code, yet the first time it worked and the second time it didn't...
Any ideas?
Make it so that each URL ends with the string "?__a=1". (When I have some free time, I'll edit this post and add the exact command to append the string to the URL's end; a rough sketch follows the output below.)
For example, instead of:
https://www.instagram.com/p/CECsuu2BgXj/
It should be:
https://www.instagram.com/p/CECsuu2BgXj/?__a=1
Output:
Detected Videos:
https://instagram.fdet1-2.fna.fbcdn.net/v/t50.2886-16/117817389_1889475617843249_1329686959743847420_n.mp4?efg=eyJ2ZW5jb2RlX3RhZyI6InZ0c192b2RfdXJsZ2VuLjcyMC5jbGlwcy5kZWZhdWx0IiwicWVfZ3JvdXBzIjoiW1wiaWdfd2ViX2RlbGl2ZXJ5X3Z0c19vdGZcIl0ifQ&_nc_ht=instagram.fdet1-2.fna.fbcdn.net&_nc_cat=105&_nc_ohc=OZRYx-3yUoAAX-b1xzZ&edm=AABBvjUBAAAA&vs=17858436890092651_3299599943&_nc_vs=HBksFQAYJEdDM0FCUWN4YUFQVGQ3WUdBUHhMQUxJXy0zTVNicV9FQUFBRhUAAsgBABUAGCRHQ0hOQ2dkbFlrcEYwOWtDQUtHQ0RqWUV4cGdzYnFfRUFBQUYVAgLIAQAoABgAGwAVAAAm1onK7OqJuT8VAigCQzMsF0AkmZmZmZmaGBJkYXNoX2Jhc2VsaW5lXzFfdjERAHX%2BBwA%3D&ccb=7-4&oe=6200F187&oh=00_AT-WTSxaoeTOd_GO0gMtqSqkgRXtxibffFG5pJGyCOPTNQ&_nc_sid=83d603
Detected Pictures:
https://instagram.fdet1-1.fna.fbcdn.net/v/t51.2885-15/e35/117915347_192544875567579_944852773653606759_n.jpg?_nc_ht=instagram.fdet1-1.fna.fbcdn.net&_nc_cat=103&_nc_ohc=0Bdvog7HWe8AX-3vsql&edm=AABBvjUBAAAA&ccb=7-4&oh=00_AT_O33BzV3tCKaDp_9eqeBUiYgyzVguImltLTuPIPKP4hg&oe=6201035F&_nc_sid=83d603
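Until the post gets that edit, a minimal sketch of appending the suffix (to_json_url is a hypothetical helper name, not part of any library):

def to_json_url(url):
    # append the ?__a=1 query string, normalizing any trailing slash first
    return url.rstrip("/") + "/?__a=1"

print(to_json_url("https://www.instagram.com/p/CECsuu2BgXj/"))
# https://www.instagram.com/p/CECsuu2BgXj/?__a=1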
For more info, check out this awesome post.
Essentially, I've written a program for a Reddit bot designed to list certain aspects of a Reddit post, such as the title or poster, as long as they fit certain criteria. I want it to be able to run automatically once every hour. I also want it to be able to make a post once every 7 days. Could someone share code for these please?
#!/usr/bin/python
import base64
import praw

# Enter your correct Reddit information into the variables below
userAgent = 'RRBot-Beta'
cID = 'Enter your so and so'
cSC = 'Enter your secret'
userN = 'Enter your Reddit username'
userP = 'Enter your Reddit password'

unfilled_post_URL = [""]
unfilled_post_url_B64 = [""]
submission_title_and_poster = {}
filled_requests = 0
unfilled_requests = 0
requests = 0

reddit = praw.Reddit(user_agent=userAgent,
                     client_id=cID,
                     client_secret=cSC,
                     username=userN,
                     password=userP)

subreddit = reddit.subreddit('riprequestsnew')  # any subreddit you want to monitor
title_keywords = {'requests', 'request'}  # makes a set of keywords to find in subreddits
comment_keyword = "share"

for submission in subreddit:
    lowercase_title = submission.title.lower()  # makes the post title lowercase so we can compare our keywords with it
    for title_keyword in title_keywords:  # goes through our keywords
        if title_keyword in lowercase_title:  # if one of our keywords matches a title in the subreddit
            requests = requests + 1  # tracks the number of requests

for post in requests:
    comments = subreddit.submission.comment.lower()  # makes the comment text lowercase
    if comment_keyword in comments:  # checks the comment text for our keyword
        filled_requests += 1  # if someone has shared something, this post will be marked as filled
    elif comment_keyword not in comments:  # if no one has shared anything, the post url will be added to a list
        submission_title_and_poster.update({subreddit.submission.title: subreddit.submission.name})
        unfilled_post_URL.append(subreddit.submission.url)
        for url in unfilled_post_URL:  # B64 encodes each url and adds it to a new list
            text = open(url, "rb")
            text_read = text.read()
            url_encoded = base64.encodestring(text_read)
            unfilled_post_url_B64.append(url_encoded)
        unfilled_requests += 1
Schedule (https://pypi.python.org/pypi/schedule) seems to be what you need.
You will have to install the Python library:
pip install schedule
Then modify the sample script:
import schedule
import time

def job():
    print("I'm working...")

schedule.every(10).seconds.do(job)
schedule.every(10).minutes.do(job)
schedule.every().hour.do(job)
schedule.every().day.at("10:30").do(job)
schedule.every(5).to(10).minutes.do(job)
schedule.every().monday.do(job)
schedule.every().wednesday.at("13:15").do(job)
schedule.every().minute.at(":17").do(job)

while True:
    schedule.run_pending()
    time.sleep(1)
Put your own function in job() and use the call that matches your timing.
Then you can run it with nohup.
Be advised that you will need to start it again if you reboot.
Here are the docs.
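For your two timings specifically, a minimal sketch could look like this (check_posts and make_weekly_post are hypothetical placeholders for the logic in your script above):

import schedule
import time

def check_posts():
    pass  # scan the subreddit and tally requests, as in your script

def make_weekly_post():
    pass  # submit the weekly post via praw

schedule.every().hour.do(check_posts)        # run once every hour
schedule.every(7).days.do(make_weekly_post)  # run once every 7 days

while True:
    schedule.run_pending()
    time.sleep(1)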
I'm using Spotipy and LyricsGenius to open lyrics in a web browser from a terminal.
I can open a URL for one song, but I have to rerun the script for each new song. What are some ways to detect the end of a song using Spotipy?
import spotipy
import webbrowser
import lyricsgenius as lg
...

# Create our spotifyObject
spotifyObject = spotipy.Spotify(auth=token)

# Create our geniusObject
geniusObject = lg.Genius(access_token)
...

while True:
    currently_playing = spotifyObject.currently_playing()
    artist = currently_playing['item']['artists'][0]['name']
    title = currently_playing['item']['name']
    search_query = artist + " " + title
    # if (currently_playing has changed):
    song = geniusObject.search_songs(search_query)
    song_url = song['hits'][0]['result']['url']
    webbrowser.open(song_url)
I was reading relevant threads such as this and this, and read through the documentation, but could not find an answer to whether this can be handled by Spotipy. I would appreciate any suggestions, thank you.
I used time.sleep(length), with the argument length standing for the remaining duration of the current track.
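A minimal sketch of that, assuming the spotifyObject and loop from the question; progress_ms and duration_ms are standard fields of the currently-playing payload:

import time

currently_playing = spotifyObject.currently_playing()
progress_ms = currently_playing['progress_ms']           # how far into the track we are
duration_ms = currently_playing['item']['duration_ms']   # total track length
length = (duration_ms - progress_ms) / 1000              # remaining time in seconds
time.sleep(length)  # wake up roughly when the track ends, then loop again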
I'm using a Discord bot for a small community and I'm trying to display the number of online players for a specific game. The code I'm using here looks OK to me, but this is my first time scraping, so I may be asking it to look for the wrong keywords. The module loads fine with no errors, but when I enter the trigger to display the information, nothing happens. Can anyone point out what I may have missed or input wrong?
Here is the code:
import discord
from discord.ext import commands

try:  # check if BeautifulSoup4 is installed
    from bs4 import BeautifulSoup
    soupAvailable = True
except ImportError:
    soupAvailable = False
import aiohttp

class bf1online:
    """My custom cog that does stuff!"""

    def __init__(self, bot):
        self.bot = bot

    @commands.command()
    async def bf1(self):
        """How many players are online atm?"""
        url = "http://bf1stats.com/"  # build the web address
        async with aiohttp.get(url) as response:
            soupObject = BeautifulSoup(await response.text(), "html.parser")
        try:
            online = soupObject.find(id_='online_section').find('h2').find('p').find('b').get_text()
            await self.bot.say(online + ' players are playing this game at the moment')
        except:
            await self.bot.say("Couldn't load amount of players. No one is playing this game anymore or there's an error.")

def setup(bot):
    bot.add_cog(bf1online(bot))
Your first problem is that it should be id=, not id_=; no trailing underscore:
soupObject.find(id='online_section')
The next problem is that that element looks like:
<div id="online_section">
    Loading currently playing player counts...
</div>
because the content is rendered using JS. Luckily, you can mimic the AJAX call that fetches the data quite easily:
In [1]: import requests
   ...: data = requests.get("http://api.bf1stats.com/api/onlinePlayers").json()

In [2]: data
Out[2]:
{'pc': {'count': 126870, 'label': 'PC', 'peak24': 179935},
 'ps4': {'count': 237504, 'label': 'PS4', 'peak24': 358182},
 'xone': {'count': 98474, 'label': 'XBOXONE', 'peak24': 266869}}
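Dropped into the cog, the command could look something like this; a sketch that keeps the old discord.py/aiohttp style of the original code and assumes the endpoint keeps returning the shape shown above:

    @commands.command()
    async def bf1(self):
        """How many players are online atm?"""
        url = "http://api.bf1stats.com/api/onlinePlayers"
        async with aiohttp.get(url) as response:
            data = await response.json()
        # sum the per-platform counts (pc, ps4, xone)
        total = sum(platform['count'] for platform in data.values())
        await self.bot.say('{} players are playing this game at the moment'.format(total))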