This is my code. I'm trying to download a playlist from YouTube. To keep the app from taking too long, I use threading, but it is still taking too long and I'm getting literally no reaction from the window. It simply isn't working. Does anyone know what the problem is? (Note that I am new to Python.) Thank you in advance.
Here is my code:
def playlist_search():
    global pl
    url = playlist_link_entry.get()
    try:
        # Create a Playlist object
        # Downloading is going to happen with pytube
        pl = Playlist(url)
    except:
        messagebox.showerror("Error", "Either link is not functioning or connection broken".title())
        return  # bail out, since pl was never created
    # Gotta get the links to every single video
    # We name it 'video_link_playlist'
    # For each url we get the itag, size and so on (exactly our previous code)
    # We map playlist_actual_search over pl.video_urls
    with concurrent.futures.ThreadPoolExecutor() as executor:
        executor.map(playlist_actual_search, pl.video_urls)
We create this function to do the search concurrently:
def playlist_actual_search(pl_link):
    # The size accumulators live at module level, so declare them global
    global playlist720_video_size, playlist480_video_size, playlist360_video_size
    # Saving all links to a list to use later
    playlist_link_list.append(pl_link)
    # Getting the itag number is going to happen with pafy
    # Create a pafy object
    video = pafy.new(pl_link)
    # Get all streams from YouTube
    streams = video.allstreams
    # Create a YouTube object to get every video individually
    yt = YouTube(pl_link)
    # Add yt to yt_list
    yt_list.append(yt)
    # Keep the streams that are "normal" (i.e. progressive)
    for stream in streams:
        if "normal" in str(stream):
            playlist_normal_stream_list.append(stream)
    # Get the itag value for each resolution
    for normal_stream in playlist_normal_stream_list:
        if "720" in str(normal_stream):
            itag720 = normal_stream.itag
            playlist_itag_value_dic["itag720"] = itag720
            normal_stream = yt.streams.get_by_itag(itag720)
            playlist720_video_size += normal_stream.filesize_approx
        if "480" in str(normal_stream):
            itag480 = normal_stream.itag
            playlist_itag_value_dic["itag480"] = itag480
            normal_stream = yt.streams.get_by_itag(itag480)
            playlist480_video_size += normal_stream.filesize_approx
        if "360" in str(normal_stream):
            itag360 = normal_stream.itag
            playlist_itag_value_dic["itag360"] = itag360
            normal_stream = yt.streams.get_by_itag(itag360)
            playlist360_video_size += normal_stream.filesize_approx
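A likely cause, assuming playlist_search is bound directly to a tkinter button: the `with` block around executor.map waits until every video has been processed, so the handler never returns and the mainloop can't redraw the window. A minimal sketch of one common fix (start_playlist_search is a hypothetical wrapper, not from the original code) is to push the whole search onto a single background thread so the button handler returns immediately:

import threading

def start_playlist_search():
    # Return control to the tkinter mainloop right away; the slow
    # pytube/pafy network work happens on a daemon thread instead
    threading.Thread(target=playlist_search, daemon=True).start()

# Wire the button to the non-blocking wrapper, not to playlist_search itself:
# search_button = Button(root, text="Search", command=start_playlist_search)

Note that tkinter isn't thread-safe, so widget updates such as the messagebox call are best marshalled back to the main thread, e.g. with root.after or a queue.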
My aim is to send an image file over WhatsApp to several people who are not in my contact list, using pywhatkit. I store the numbers in an Excel sheet, read them with pandas, and loop over them to send the image. The first number receives the image successfully, but for the remaining numbers WhatsApp opens and the message is never sent. Also, please tell me how to avoid WhatsApp opening in a new Chrome tab every time.
Here's my code
import pandas as pd
import pywhatkit
import time
from datetime import datetime

file_name = r'd:\Recipients data.xlsx'
df = pd.read_excel(file_name)
contact = df['Contact'].tolist()

now = datetime.now()
current_time = now.strftime("%H:%M:%S")
h, m, s = current_time.split(':')
print(now)
h1 = int(h)
m1 = int(m)
s1 = int(s)

file_path = r'C:/Users/Asus/Downloads/subadev.jpeg'
for send_number in contact:
    new_format = f'"{"+91" + str(send_number)}"'
    print(new_format)
    pywhatkit.sendwhats_image(new_format, file_path)
    time.sleep(15)
Expecting: send images to every number without WhatsApp opening in a new tab every time.
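Two things worth checking, offered as a sketch rather than a confirmed fix: pywhatkit expects the receiver as a plain string like "+91XXXXXXXXXX", so wrapping the number in literal double quotes may be why only the first send works; and recent pywhatkit releases accept tab_close/close_time parameters on sendwhats_image to close each tab after sending (verify with help(pywhatkit.sendwhats_image) on your installed version):

import pandas as pd
import pywhatkit

df = pd.read_excel(r'd:\Recipients data.xlsx')

for send_number in df['Contact'].tolist():
    receiver = "+91" + str(send_number)  # no extra quote characters
    # tab_close/close_time exist in recent pywhatkit versions;
    # check your installed release before relying on them
    pywhatkit.sendwhats_image(receiver, r'C:/Users/Asus/Downloads/subadev.jpeg',
                              wait_time=15, tab_close=True, close_time=3)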
As a personal project, I decided to create one of those Reddit text-to-speech bots.
I pulled all the data from Reddit with praw:
import praw, random

def scrapeData(subredditName):
    # Instantiate praw
    reddit = praw.Reddit()
    # Get subreddit
    subreddit = reddit.subreddit(subredditName)
    # Get a bunch of posts and convert them into a list
    posts = list(subreddit.new(limit=100))
    # Pick a random index, bounded by how many posts actually came back
    randomNumber = random.randint(0, len(posts) - 1)
    # Store the post's title and description in variables
    postTitle = posts[randomNumber].title
    postDesc = posts[randomNumber].selftext
    return postTitle + " " + postDesc
Then, I converted it to speech stored in an .mp3 file with the Google Cloud Text-to-Speech client.
from google.cloud import texttospeech

def convertTextToSpeech(textString):
    # Instantiate the TTS client (from_service_account_json is a classmethod)
    client = texttospeech.TextToSpeechClient.from_service_account_json("path/to/json")
    # Set the text input to be synthesized
    synthesisInput = texttospeech.SynthesisInput(text=textString)
    # Build the voice request
    voice = texttospeech.VoiceSelectionParams(language_code="en-US",
                                              ssml_gender=texttospeech.SsmlVoiceGender.MALE)
    # Select the type of audio file
    audioConfig = texttospeech.AudioConfig(audio_encoding=texttospeech.AudioEncoding.MP3)
    # Perform the TTS request on the text input
    response = client.synthesize_speech(input=synthesisInput, voice=voice,
                                        audio_config=audioConfig)
    # Write the binary audio content to an mp3 file
    with open("output.mp3", "wb") as out:
        out.write(response.audio_content)
I've created an .mp4 with moviepy that has generic footage in the background with the audio synced over it,
from moviepy.editor import *
from moviepy.video.tools.subtitles import SubtitlesClip
# Get video and audio source files
clip = VideoFileClip("background.mp4").subclip(20,30)
audio = AudioFileClip("output.mp3").subclip(0, 10)
# Set audio and create final video
videoClip = clip.set_audio(audio)
videoClip.write_videofile("output.mp4")
but my issue is that I can't find a way to display only the current word or sentence on screen as a subtitle, rather than the entire post.
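Since the synthesized mp3 carries no per-word timestamps, one workaround is to approximate them: split the post into words, give each word a slice of the audio's duration proportional to its length, and feed those timed chunks to the SubtitlesClip already imported above. A rough sketch under those assumptions (the text variable and font settings are placeholders, and TextClip requires ImageMagick to be installed):

from moviepy.editor import VideoFileClip, AudioFileClip, TextClip, CompositeVideoClip
from moviepy.video.tools.subtitles import SubtitlesClip

text = "the post text returned by scrapeData()"  # placeholder
audio = AudioFileClip("output.mp3").subclip(0, 10)
clip = VideoFileClip("background.mp4").subclip(20, 30)

# Allocate audio time to each word proportionally to its character count
words = text.split()
total_chars = sum(len(w) for w in words)
subs, t = [], 0.0
for w in words:
    dur = audio.duration * len(w) / total_chars
    subs.append(((t, t + dur), w))
    t += dur

# SubtitlesClip renders each ((start, end), text) entry with the generator
generator = lambda txt: TextClip(txt, fontsize=48, color="white")
subtitles = SubtitlesClip(subs, generator)

video = CompositeVideoClip([clip.set_audio(audio),
                            subtitles.set_position(("center", "bottom"))])
video.write_videofile("output.mp4")

The timings are only estimates; words are assumed to be spoken at a uniform character rate, which is close enough for short posts but drifts on long ones.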
I'm attempting to write a Python project that plays multiple parts of a song at the same time.
For background, a song is split into "stems", and each stem is then played simultaneously to recreate the full song. What I am trying to achieve is using potentiometers to control the volume of each stem, so that the user can mix songs differently. For a commercial analogue, Kanye West's Stem Player is what I'm aiming for.
I can change the volume of the overlaid song at the end, but what I want is to change the volume of each stem with a potentiometer while the song is playing. Is this even possible using pydub? Below is the code I have right now.
from pydub import AudioSegment
from pydub.playback import play
vocals = AudioSegment.from_file("walkin_vocals.mp3")
drums = AudioSegment.from_file("walkin_drums.mp3")
bass = AudioSegment.from_file("walkin_bass.mp3")
vocalsDrums = vocals.overlay(drums)
bassVocalsDrums = vocalsDrums.overlay(bass)
songQuiet = bassVocalsDrums - 20
play(songQuiet)
Solved this question: I ended up using pyaudio instead of pydub.
With pyaudio, I was able to define a custom stream_callback function. Within this callback, I multiply each stem by a volume modifier, then add the stems together into one output buffer.
def callback(in_data, frame_count, time_info, status):
    # Volume modifiers, updated elsewhere from the potentiometers
    global drumsMod, vocalsMod, bassMod, otherMod
    # Read the next block of frames from each stem's wave file
    drums = drumsWF.readframes(frame_count)
    vocals = vocalsWF.readframes(frame_count)
    bass = bassWF.readframes(frame_count)
    other = otherWF.readframes(frame_count)
    # Decode the raw bytes into 16-bit integer sample arrays
    decodedDrums = numpy.frombuffer(drums, numpy.int16)
    decodedVocals = numpy.frombuffer(vocals, numpy.int16)
    decodedBass = numpy.frombuffer(bass, numpy.int16)
    decodedOther = numpy.frombuffer(other, numpy.int16)
    # Scale each stem by its modifier and mix them into one block
    newdata = (decodedDrums * drumsMod + decodedVocals * vocalsMod
               + decodedBass * bassMod + decodedOther * otherMod).astype(numpy.int16)
    return (newdata.tobytes(), pyaudio.paContinue)
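For context, here is a minimal sketch of the scaffolding that callback needs; the file names, the *WF wave-file variables, and the initial modifier values are assumptions, since the answer only shows the callback itself. All stems are assumed to share the same sample rate, width, and channel count:

import time
import wave
import numpy
import pyaudio

# Per-stem volume modifiers; a potentiometer-reading loop would update these
drumsMod, vocalsMod, bassMod, otherMod = 1.0, 1.0, 1.0, 1.0

drumsWF = wave.open("walkin_drums.wav", "rb")
vocalsWF = wave.open("walkin_vocals.wav", "rb")
bassWF = wave.open("walkin_bass.wav", "rb")
otherWF = wave.open("walkin_other.wav", "rb")

p = pyaudio.PyAudio()
stream = p.open(format=p.get_format_from_width(drumsWF.getsampwidth()),
                channels=drumsWF.getnchannels(),
                rate=drumsWF.getframerate(),
                output=True,
                stream_callback=callback)  # the callback defined above

stream.start_stream()
while stream.is_active():
    time.sleep(0.1)  # read potentiometers and update the *Mod globals here
stream.close()
p.terminate()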
I wrote code to stream audio, as simply as the following.
If no callback is registered (the part marked **), this code works fine.
But I would like to register a play callback to preprocess the streamed data.
How to register a callback function can be found in the python-vlc documentation,
but I can't figure out how to write the callback itself.
The second argument to the callback, samples, is a pointer to the data to be played.
How should I write the callback function at the part marked (*) using this pointer?
Any reply would be appreciated.
import vlc
import re
import requests
import ctypes
url = "http://serpent0.duckdns.org:8088/kbsfm.pls"
res = requests.get(url)
res.raise_for_status()
# retrieve url
p = re.compile("https://.+")
m = p.search(res.text)
url = m.group()
# AudioPlayCb = ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.c_void_p, ctypes.c_uint, ctypes.c_int64)
@vlc.CallbackDecorators.AudioPlayCb
def play_callback(opaque, samples, count, pts):
    """
    @param data: data pointer as passed to L{libvlc_audio_set_callbacks}() [IN].
    @param samples: pointer to a table of audio samples to play back [IN].
    @param count: number of audio samples to play back.
    @param pts: expected play time stamp (see libvlc_delay()).
    """
    # HOW DO I RETURN SOMETHING FROM HERE? (*)
    pass
    return buffer
instance = vlc.Instance(["--prefetch-buffer-size=2000 --prefetch-read-size=5000 --network-caching=1000"]) #define VLC instance
media = instance.media_new(url)
player = media.player_new_from_media()
player.audio_set_format("f32l", 48000, 2)
# PLAYS WELL WITHOUT THIS LINE (**)
player.audio_set_callbacks(play=play_callback, pause=None, resume=None, flush=None, drain=None, opaque=None)
player.play() #Play the media
c = input()
I wrote the play callback function as below.
I think the buffer that samples points to should be copied to another buffer in order to play the samples.
@vlc.CallbackDecorators.AudioPlayCb
def play_callback(data, samples, count, pts):
    # f32l stereo: 4 bytes per sample * 2 channels
    bytes_read = count * 4 * 2
    buffer_array = ctypes.cast(samples, ctypes.POINTER(ctypes.c_char * bytes_read))
    # This code is invalid. To do it right, where should buffer_array be copied to?
    buffer = bytearray(bytes_read)
    buffer[:] = buffer_array.contents
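A sketch of one way to complete this, with the caveat that it's an assumption about the goal rather than a confirmed approach: ctypes.string_at copies the whole sample block out of the VLC buffer in one call, and since the callback runs on a VLC thread, handing the copy to a queue keeps the processing out of the audio path. Note that the play callback returns nothing; and once audio_set_callbacks is registered, VLC no longer renders the audio itself, so playing the preprocessed data back (e.g. with pyaudio) is up to you:

import queue

audio_q = queue.Queue()

@vlc.CallbackDecorators.AudioPlayCb
def play_callback(opaque, samples, count, pts):
    # f32l stereo: 4 bytes per sample * 2 channels
    bytes_read = count * 4 * 2
    # string_at copies the C buffer into an immutable bytes object,
    # which is safe to keep after the callback returns
    chunk = ctypes.string_at(samples, bytes_read)
    audio_q.put(chunk)

# Elsewhere, e.g. on a worker thread:
#     chunk = audio_q.get()  # raw little-endian float32 frames
#     ... preprocess chunk and write it to an output device ...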
I'm trying to create an audio stream that has a constant audio source (in this case, audiotestsrc) to which I can occasionally add sounds from files (of various formats, that's why I'm using decodebin) through the play_file() method. I use an adder for that purpose. However, for some reason, I cannot add the second sound correctly. Not only does the program play the sound incorrectly, it also completely stops the original audiotestsrc. Here's my code so far:
import gst
import gobject
gobject.threads_init()

pipe = gst.Pipeline()

adder = gst.element_factory_make("adder", "adder")
first_sink = adder.get_request_pad('sink%d')
pipe.add(adder)

test = gst.element_factory_make("audiotestsrc", "test")
test.set_property('freq', 100)
pipe.add(test)
testsrc = test.get_pad("src")
testsrc.link(first_sink)

output = gst.element_factory_make("alsasink", "output")
pipe.add(output)
adder.link(output)

pipe.set_state(gst.STATE_PLAYING)
raw_input('Press key to play sound')

def play_file(filename):
    adder_sink = adder.get_request_pad('sink%d')
    audiofile = gst.element_factory_make('filesrc', 'audiofile')
    audiofile.set_property('location', filename)
    decoder = gst.element_factory_make('decodebin', 'decoder')

    def on_new_decoded_pad(element, pad, last):
        pad.link(adder_sink)

    decoder.connect('new-decoded-pad', on_new_decoded_pad)
    pipe.add(audiofile)
    pipe.add(decoder)
    audiofile.link(decoder)
    pipe.set_state(gst.STATE_PAUSED)
    pipe.set_state(gst.STATE_PLAYING)

play_file('sample.wav')

while True:
    pass
Thanks to moch on #gstreamer, I realized that all adder inputs should have the same format. I modified the above script so that the caps "audio/x-raw-int, endianness=(int)1234, channels=(int)1, width=(int)16, depth=(int)16, signed=(boolean)true, rate=(int)11025" (for example) go before every input to the adder.
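A sketch of what that change can look like; link_to_adder is a hypothetical helper, and the fix described is just forcing the same caps on each adder input, with audioconvert/audioresample doing the conversion:

CAPS = gst.Caps("audio/x-raw-int, endianness=(int)1234, channels=(int)1, "
                "width=(int)16, depth=(int)16, signed=(boolean)true, rate=(int)11025")

def link_to_adder(src_pad):
    # Convert/resample any input to the common format,
    # then hand it to a fresh adder request pad
    convert = gst.element_factory_make('audioconvert')
    resample = gst.element_factory_make('audioresample')
    capsfilter = gst.element_factory_make('capsfilter')
    capsfilter.set_property('caps', CAPS)
    pipe.add(convert, resample, capsfilter)
    gst.element_link_many(convert, resample, capsfilter)
    src_pad.link(convert.get_pad('sink'))
    capsfilter.get_pad('src').link(adder.get_request_pad('sink%d'))
    # Elements added to a playing pipeline must be set to PLAYING themselves
    for e in (convert, resample, capsfilter):
        e.set_state(gst.STATE_PLAYING)

# Used both for the audiotestsrc pad and inside on_new_decoded_pad:
#     link_to_adder(testsrc)
#     def on_new_decoded_pad(element, pad, last):
#         link_to_adder(pad)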