I want to merge both audio files, but it seems as if there is a pause of about 2 seconds between them. Can anyone look into it further? It would be a great help.
import simpleaudio as sa
filename = '3.wav'
wave_obj = sa.WaveObject.from_wave_file(filename)
play_obj = wave_obj.play()
play_obj.wait_done()
filename = '4.wav'
wave_obj = sa.WaveObject.from_wave_file(filename)
play_obj = wave_obj.play()
play_obj.wait_done()
I believe the problem is that after 3.wav ends, it takes a little time for the program to process the rest of the code. Let the program load both files before starting playback:
import simpleaudio as sa
filename1 = '3.wav'
filename2 = '4.wav'
wave_obj1 = sa.WaveObject.from_wave_file(filename1)
wave_obj2 = sa.WaveObject.from_wave_file(filename2)
play_obj1 = wave_obj1.play()
play_obj1.wait_done()
play_obj2 = wave_obj2.play()
play_obj2.wait_done()
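If you want a single, truly gapless playback, another option (not part of the original answer, and assuming both files share the same sample rate, channel count and sample width) is to concatenate the raw frames and play them as one buffer:
import wave
import simpleaudio as sa

frames = []
for name in ('3.wav', '4.wav'):
    with wave.open(name, 'rb') as w:
        params = w.getparams()  # keep the parameters of the last file read
        frames.append(w.readframes(w.getnframes()))

# Playing the concatenated frames as a single buffer removes the per-file startup delay
play_obj = sa.play_buffer(b''.join(frames),
                          num_channels=params.nchannels,
                          bytes_per_sample=params.sampwidth,
                          sample_rate=params.framerate)
play_obj.wait_done()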
I am attempting to use the speech recognition toolkit VOSK and the speaker diarization package Resemblyzer to transcribe audio and then identify the speakers in the audio.
Tools:
https://github.com/alphacep/vosk-api
https://github.com/resemble-ai/Resemblyzer
I can do both things individually but run into issues when trying to do them in a single Python script.
I used the following guide when setting up the diarization system:
https://medium.com/saarthi-ai/who-spoke-when-build-your-own-speaker-diarization-module-from-scratch-e7d725ee279
Computer specs are as follows:
Intel(R) Core(TM) i3-7100 CPU @ 3.90GHz, 3912 MHz, 2 Core(s), 4 Logical Processor(s)
32GB RAM
The following is my code. I am not too sure whether using threading is appropriate here or whether I even implemented it correctly. How can I best optimize this code so that it achieves the results I am looking for without crashing?
from vosk import Model, KaldiRecognizer
from pydub import AudioSegment
import json
import sys
import os
import subprocess
import datetime
from resemblyzer import preprocess_wav, VoiceEncoder
from pathlib import Path
from resemblyzer.hparams import sampling_rate
from spectralcluster import SpectralClusterer
import threading
import queue
import gc
def recognition(queue, audio, FRAME_RATE):
    model = Model("Vosk_Models/vosk-model-small-en-us-0.15")
    rec = KaldiRecognizer(model, FRAME_RATE)
    rec.SetWords(True)
    rec.AcceptWaveform(audio.raw_data)
    result = rec.Result()
    transcript = json.loads(result)  # ["text"]
    # return transcript
    queue.put(transcript)

def diarization(queue, audio):
    wav = preprocess_wav(audio)
    encoder = VoiceEncoder("cpu")
    _, cont_embeds, wav_splits = encoder.embed_utterance(wav, return_partials=True, rate=16)
    print(cont_embeds.shape)
    clusterer = SpectralClusterer(
        min_clusters=2,
        max_clusters=100,
        p_percentile=0.90,
        gaussian_blur_sigma=1)
    labels = clusterer.predict(cont_embeds)

    def create_labelling(labels, wav_splits):
        times = [((s.start + s.stop) / 2) / sampling_rate for s in wav_splits]
        labelling = []
        start_time = 0
        for i, time in enumerate(times):
            if i > 0 and labels[i] != labels[i - 1]:
                temp = [str(labels[i - 1]), start_time, time]
                labelling.append(tuple(temp))
                start_time = time
            if i == len(times) - 1:
                temp = [str(labels[i]), start_time, time]
                labelling.append(tuple(temp))
        return labelling

    # return
    labelling = create_labelling(labels, wav_splits)
    queue.put(labelling)

def identify_speaker(queue1, queue2):
    transcript = queue1.get()
    labelling = queue2.get()
    for speaker in labelling:
        speakerID = speaker[0]
        speakerStart = speaker[1]
        speakerEnd = speaker[2]
        result = transcript['result']
        words = [r['word'] for r in result if speakerStart < r['start'] < speakerEnd]
        # return
        print("Speaker", speakerID, ":", ' '.join(words), "\n")

def main():
    queue1 = queue.Queue()
    queue2 = queue.Queue()
    FRAME_RATE = 16000
    CHANNELS = 1
    podcast = AudioSegment.from_mp3("Podcast_Audio/Film-Release-Clip.mp3")
    podcast = podcast.set_channels(CHANNELS)
    podcast = podcast.set_frame_rate(FRAME_RATE)
    first_thread = threading.Thread(target=recognition, args=(queue1, podcast, FRAME_RATE))
    second_thread = threading.Thread(target=diarization, args=(queue2, podcast))
    third_thread = threading.Thread(target=identify_speaker, args=(queue1, queue2))
    first_thread.start()
    first_thread.join()
    gc.collect()
    second_thread.start()
    second_thread.join()
    gc.collect()
    third_thread.start()
    third_thread.join()
    gc.collect()
    # transcript = recognition(podcast, FRAME_RATE)
    #
    # labelling = diarization(podcast)
    #
    # print(identify_speaker(transcript, labelling))

if __name__ == '__main__':
    main()
When I say crash, I mean everything freezes: I have to hold down the power button on the desktop and turn it back on again. There is no blue/blank screen, it is just frozen in my IDE looking at my code. Any help in resolving this issue would be greatly appreciated.
Pydub's AudioSegment was not returning a suitable type for the Resemblyzer function preprocess_wav.
podcast = AudioSegment.from_mp3("Podcast_Audio/Film-Release-Clip.mp3")
preprocess_wav instead requires a NumPy array or a Path.
audio_file_path = 'Podcast_Audio/WAV-Film-Release-Clip.wav'
wav_fpath = Path(audio_file_path)
wav = preprocess_wav(wav_fpath)
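If you prefer to keep loading the audio with pydub, one possible workaround (my own sketch, not part of the original fix, and assuming the decoded audio is 16-bit PCM) is to convert the AudioSegment into a float32 NumPy array before passing it to preprocess_wav:
import numpy as np
from pydub import AudioSegment
from resemblyzer import preprocess_wav

podcast = AudioSegment.from_mp3("Podcast_Audio/Film-Release-Clip.mp3")
podcast = podcast.set_channels(1).set_frame_rate(16000)

# Scale the 16-bit integer samples to float32 in [-1.0, 1.0]
samples = np.array(podcast.get_array_of_samples()).astype(np.float32)
samples /= np.iinfo(np.int16).max
wav = preprocess_wav(samples, source_sr=16000)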
Additionally, the preprocess_wav functionality can be achieved using Librosa if desired:
import librosa
import numpy as np

def preprocess_wav(waveform, sr):
    # Resample to 16 kHz and peak-normalize the signal
    waveform = librosa.resample(waveform, orig_sr=sr, target_sr=16000)
    waveform = waveform.astype(np.float32) / np.max(np.abs(waveform))
    return waveform

waveform, sr = librosa.load('Podcast_Audio/WAV-Film-Release-Clip.wav')
wav = preprocess_wav(waveform, sr)
I'm trying to make a video the same duration as an audio clip.
This kind of works, but after 2 seconds (the subclip duration) the image just freezes while the audio continues.
I was trying to achieve the same behavior as in this tutorial, where it seems that the video repeats itself. My original video is only 2 seconds long:
import moviepy.editor as mp
raw_video = mp.VideoFileClip("videotest.mp4", audio=False)
raw_audio = mp.AudioFileClip("frei.mp3")
raw_video = raw_video.subclip(0, 2)
my_video = raw_video.set_duration(raw_audio.duration)
my_video.audio = raw_audio
my_video.write_videofile('result.mp4')
This is the solution I've found, but I don't really know if there is a better way. It is taking too long to write the video:
import moviepy.editor as mp
import math
raw_video = mp.VideoFileClip("videotest.mp4", audio=False)
raw_audio = mp.AudioFileClip("frei.mp3")
# build a list of clips that repeats the video until it covers the audio duration
amount = math.ceil(raw_audio.duration / raw_video.duration)
clips = [raw_video for i in range(amount)]
final_video = mp.concatenate_videoclips(clips, method='compose')
final_video.audio = raw_audio
final_video.write_videofile('result42.mp4')
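A possibly simpler variant (untested on my side, and assuming moviepy's built-in loop effect gives the same repeating behaviour as the manual concatenation) would be to let moviepy repeat the clip for you:
import moviepy.editor as mp

raw_video = mp.VideoFileClip("videotest.mp4", audio=False).subclip(0, 2)
raw_audio = mp.AudioFileClip("frei.mp3")

# Repeat the 2-second clip until it covers the full audio duration
looped_video = raw_video.fx(mp.vfx.loop, duration=raw_audio.duration)
looped_video.audio = raw_audio
looped_video.write_videofile('result_loop.mp4')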
I have 1,440 audio files to feed into a neural network. The problem is that they are not all the same length. I used the answer posted on:
Adding silent frame to wav file using python
but it doesn't seem to work. I wanted to add a few seconds of silence to the end of my files, and then trim them to all be 5 seconds long. Can someone please help me with this?
(I also tried using pysox, but that gives me the error "This install of SoX cannot process .wav files.")
I am using Google Colab for this. The code is:
import wave, os, glob
from pydub import AudioSegment
from pydub.playback import play
path = 'drive/MyDrive/Ravdess/Sad' #This is the folder from my Google Drive which has the audio files
count = 0
for filename in glob.glob(os.path.join(path, '*.wav')):
    w = wave.open(filename, 'r')
    d = w.readframes(w.getnframes())
    frames = w.getnframes()
    rate = w.getframerate()
    duration = frames / float(rate)
    count += 1
    print(filename, "count =", count, "duration = ", duration)
    audio_in_file = filename
    audio_out_file = "out.wav"
    new_duration = duration
    # Only append silence until time = 5 seconds.
    one_sec = AudioSegment.silent(duration=2000)  # duration in milliseconds
    song = AudioSegment.from_wav(audio_in_file)
    final_song = one_sec + song
    new_frames = w.getnframes()
    new_rate = w.getframerate()
    new_duration = new_frames / float(rate)
    final_song.export(audio_out_file, format="wav")
    print(final_song, "count =", count, "new duration = ", new_duration)
    w.close()
This gives the output:
drive/MyDrive/Ravdess/Sad/03-01-04-01-02-01-01.wav count = 1 duration = 3.5035
<pydub.audio_segment.AudioSegment object at 0x7fd5b7ca06a0> count = 1 new duration = 3.5035
drive/MyDrive/Ravdess/Sad/03-01-04-01-02-02-01.wav count = 2 duration = 3.370041666666667
<pydub.audio_segment.AudioSegment object at 0x7fd5b7cbc860> count = 2 new duration = 3.370041666666667
... (and so on for all the files)
Since you are already using pydub, I'd do something like this:
from pydub import AudioSegment
from pydub.playback import play
input_wav_file = "/path/to/input.wav"
output_wav_file = "/path/to/output.wav"
target_wav_time = 5 * 1000 # 5 seconds (or 5000 milliseconds)
original_segment = AudioSegment.from_wav(input_wav_file)
silence_duration = target_wav_time - len(original_segment)
silenced_segment = AudioSegment.silent(duration=silence_duration)
combined_segment = original_segment + silenced_segment
combined_segment.export(output_wav_file, format="wav")
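To process all 1,440 files in one go, a rough sketch of a batch loop (my addition; the 'padded_' output prefix is made up, and clips longer than 5 seconds are simply trimmed here) could look like this:
import os, glob
from pydub import AudioSegment

path = 'drive/MyDrive/Ravdess/Sad'
target_ms = 5 * 1000  # pad or trim everything to 5 seconds

for filename in glob.glob(os.path.join(path, '*.wav')):
    segment = AudioSegment.from_wav(filename)
    if len(segment) < target_ms:
        # append just enough silence to reach 5 seconds
        segment = segment + AudioSegment.silent(duration=target_ms - len(segment))
    else:
        # trim anything longer than 5 seconds
        segment = segment[:target_ms]
    out_name = os.path.join(path, 'padded_' + os.path.basename(filename))
    segment.export(out_name, format="wav")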
I am very green when it comes to programming but wish to learn and develop.
I want to write a simple application that will be useful in linguistic treatment, but at first it is a simple demo.
The application is supposed to display an image and record sound while the image is shown.
There are a few variables, the interval and the image/sound/movie clip paths, taken from an external txt file (for the beginning; later I would like to build some kind of creator with presaved configurations).
The config file now looks like:
10
path1
path2
...
The first line sets the interval in seconds; the next lines are paths to images, sounds or movie clips (I tried with images for now).
#!/usr/bin/python
# main.py
import sys
from PyQt4 import QtGui, QtCore
from Tkinter import *
import numpy as np
import pyaudio
import wave
import time
from PIL import Image, ImageTk
import multiprocessing
import threading
from threading import Thread
master = Tk()
conf_file = open("conf.txt", "r") #open conf file read only
conf_lines = conf_file.readlines()
conf_file.close()
interwal = int(conf_lines[0].strip())  # interval value (in seconds) from conf.txt file
bodziec1 = conf_lines[1].strip()  # paths to stimulus files (img / audio / video)
bodziec2 = conf_lines[2].strip()
bodziec3 = conf_lines[3].strip()
CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 44100
RECORD_SECONDS = interwal #every stimulus has it's own audio record file for further work
timestr = time.strftime("%Y%m%d-%H%M%S") #filename is set to year / month / day - hour / minute / second for easier systematization
def nagrywanie():  # recording action - found somewhere in the all-knowing web
    p = pyaudio.PyAudio()
    stream = p.open(format=FORMAT,
                    channels=CHANNELS,
                    rate=RATE,
                    input=True,
                    frames_per_buffer=CHUNK)
    print("* nagrywanie")  # info about record to start
    frames = []
    for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
        data = stream.read(CHUNK)
        frames.append(data)
    print("* koniec nagrywania")  # info about record to end
    stream.stop_stream()
    stream.close()
    p.terminate()
    wf = wave.open(timestr, 'wb')
    wf.setnchannels(CHANNELS)
    wf.setsampwidth(p.get_sample_size(FORMAT))
    wf.setframerate(RATE)
    wf.writeframes(b''.join(frames))
    wf.close()

def bod1():  # stimulus 1st to display / play
    image = Image.open(bodziec1)
    photo = ImageTk.PhotoImage(image)

def bod2():
    image = Image.open(bodziec2)  # stimulus 2nd to display / play
    photo = ImageTk.PhotoImage(image)

def bod3():
    image = Image.open(bodziec3)  # stimulus 3rd to display / play
    photo = ImageTk.PhotoImage(image)

def odpal():  # attempt to run display and recording at the same time
    Thread(target=bod1).start()
    Thread(target=nagrywanie).start()
    # Wait interwal for odpal  # give impetus for time in first line of the conf.txt
    time.sleep(interwal)
    # Terminate odpal  # stop giving impetus
    bod1.terminate()
    # Cleanup  # ?? this part is also copied from the all-knowing internet
    p.join()

b = Button(master, text="OK", command=odpal)  # wanted the program to be easier for non-programmers to operate, so a few buttons are necessary
b.pack()
mainloop()
When I asked a few programmers about the code, they said it is as simple as riding a bike, so I wanted to learn how to write it by myself.
I guess it is a piece of cake for professionals; a thousand thanks to those who are even willing to read this junk.
It takes a lot of time for me to understand and figure out the exact commands, which is why I am politely asking for help, not only for my education but also for better diagnosis.
Excuse me for the language; English is not my native language.
I'm trying to create an audio stream that has a constant audio source (in this case, audiotestsrc) to which I can occasionally add sounds from files (of various formats, that's why I'm using decodebin) through the play_file() method. I use an adder for that purpose. However, for some reason, I cannot add the second sound correctly. Not only does the program play the sound incorrectly, it also completely stops the original audiotestsrc. Here's my code so far:
import gst; import gobject; gobject.threads_init()
pipe = gst.Pipeline()
adder = gst.element_factory_make("adder", "adder")
first_sink = adder.get_request_pad('sink%d')
pipe.add(adder)
test = gst.element_factory_make("audiotestsrc", "test")
test.set_property('freq', 100)
pipe.add(test)
testsrc = test.get_pad("src")
testsrc.link(first_sink)
output = gst.element_factory_make("alsasink", "output")
pipe.add(output)
adder.link(output)
pipe.set_state(gst.STATE_PLAYING)
raw_input('Press key to play sound')
def play_file(filename):
    adder_sink = adder.get_request_pad('sink%d')
    audiofile = gst.element_factory_make('filesrc', 'audiofile')
    audiofile.set_property('location', filename)
    decoder = gst.element_factory_make('decodebin', 'decoder')

    def on_new_decoded_pad(element, pad, last):
        pad.link(adder_sink)

    decoder.connect('new-decoded-pad', on_new_decoded_pad)
    pipe.add(audiofile)
    pipe.add(decoder)
    audiofile.link(decoder)
    pipe.set_state(gst.STATE_PAUSED)
    pipe.set_state(gst.STATE_PLAYING)

play_file('sample.wav')

while True:
    pass
Thanks to moch on #gstreamer, I realized that all adder sources should have the same format. I modified the above script so that the caps "audio/x-raw-int, endianness=(int)1234, channels=(int)1, width=(int)16, depth=(int)16, signed=(boolean)true, rate=(int)11025" (as an example) are enforced before every input of the adder.
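For reference, a minimal sketch of what that can look like with the same legacy gst 0.10 bindings (my own reconstruction, not the exact script I ended up with): put an audioconvert, an audioresample and a capsfilter carrying those caps in front of every adder input.
caps = gst.Caps("audio/x-raw-int, endianness=(int)1234, channels=(int)1, "
                "width=(int)16, depth=(int)16, signed=(boolean)true, rate=(int)11025")

def link_to_adder(src_pad):
    # Force this branch into the common raw format before it reaches the adder
    convert = gst.element_factory_make('audioconvert')
    resample = gst.element_factory_make('audioresample')
    capsfilter = gst.element_factory_make('capsfilter')
    capsfilter.set_property('caps', caps)
    pipe.add(convert, resample, capsfilter)
    src_pad.link(convert.get_pad('sink'))
    gst.element_link_many(convert, resample, capsfilter)
    capsfilter.get_pad('src').link(adder.get_request_pad('sink%d'))
    for element in (convert, resample, capsfilter):
        element.set_state(gst.STATE_PLAYING)
The on_new_decoded_pad callback can then call link_to_adder(pad) instead of linking the decoded pad to the adder directly.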