I have a script that needs to run audio through ffmpeg and pydub multiple times each. I realized that by the end of all this the audio sounds a bit muffled compared to the original. What steps can I take to make sure the quality is as high as possible at the end? I'm going from mp3/m4b to wav btw.
I'm building a simple Python application that involves altering the speed of an audio track.
(I acknowledge that changing the frame rate of audio also makes the pitch sound different, and I do not care about the pitch being altered.)
I have tried the solution from abhi krishnan using pydub, which looks like this:
from pydub import AudioSegment

sound = AudioSegment.from_file(…)

def speed_change(sound, speed=1.0):
    # Manually override the frame_rate. This tells the computer how many
    # samples to play per second
    sound_with_altered_frame_rate = sound._spawn(sound.raw_data, overrides={
        "frame_rate": int(sound.frame_rate * speed)
    })
    # convert the sound with altered frame rate to a standard frame rate
    # so that regular playback programs will work right. They often only
    # know how to play audio at standard frame rate (like 44.1k)
    return sound_with_altered_frame_rate.set_frame_rate(sound.frame_rate)
However, the audio with changed speed sounds distorted and crackly, which does not happen when doing the same thing in Audacity, and I hope to find a way to reproduce in Python how Audacity (or other digital audio editors) changes the speed of audio tracks.
I presume the quality loss is caused by the original audio having a low frame rate (8 kHz), and that .set_frame_rate(sound.frame_rate) tries to sample points of the speed-altered audio at that original, low frame rate. Simple attempts at setting the frame rate of the original audio, of the speed-altered one, and of the one to be exported did not work out.
Is there a way, in pydub or another Python module, to perform this task the way Audacity does?
Assuming what you want to do is play audio back at, say, 1.5x the speed of the original: this is equivalent to resampling the audio down to 2/3 of its samples and pretending the sampling rate hasn't changed. Assuming this is what you are after, I suspect most DSP packages support it (search for "audio resampling" as the key phrase).
You can try scipy.signal.resample_poly()
import numpy as np
from scipy.signal import resample_poly

# resample_poly needs an array of samples, not pydub's raw bytes
# (this also assumes mono audio)
samples = np.array(sound.get_array_of_samples())
dec_data = resample_poly(samples, up=2, down=3)
dec_data should have 2/3 of the number of samples of the original. If you play dec_data back at the sound's original sampling rate, you should get a sped-up version. The downside of resample_poly is that it requires a rational factor, and a large numerator or denominator makes the output less ideal. You can also try scipy's resample function, or look for other packages that support audio resampling.
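For completeness, a minimal sketch of getting the resampled data back into pydub for playback or export (assuming 16-bit mono audio; _spawn reuses the original segment's metadata, so the shorter signal plays back sped up):

import numpy as np

# Wrap the resampled samples back into an AudioSegment at the original rate
sped_up = sound._spawn(dec_data.astype(np.int16).tobytes())
sped_up.export("sped_up.wav", format="wav")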
I have a number of .mp3 files which all start with a short voice introduction followed by piano music. I would like to remove the voice part and just be left with the piano part, preferably using a Python script. The voice part is of variable length, ie I cannot use ffmpeg to remove a fixed number of seconds from the start of each file.
Is there a way of detecting the start of the piano part, and thereby knowing how many seconds to remove, using ffmpeg or even Python itself?
Thank you
This is a non-trivial problem if you want a good outcome.
Quick and dirty solutions would involve inferred parameters like:
"there's usually 15 seconds of no or low-db audio between the speaker and the piano"
"there's usually not 15 seconds of no or low-db audio in the middle of the piano piece"
and then use those parameters to try to get something "good enough" with audio analysis libraries, along the lines of the sketch below.
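For example, a quick-and-dirty sketch using pydub's silence detection (the file name, the 15-second gap and the -40 dBFS threshold are all guesses you would have to tune):

from pydub import AudioSegment
from pydub.silence import detect_silence

audio = AudioSegment.from_mp3("recording.mp3")

# Look for long, quiet gaps; assume the first one separates voice from piano
silences = detect_silence(audio, min_silence_len=15000, silence_thresh=-40)

if silences:
    _, gap_end = silences[0]      # (start_ms, end_ms) of the first long gap
    audio[gap_end:].export("piano_only.mp3", format="mp3")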
I suspect you'll be disappointed with that approach, given that I can think of many piano pieces with long pauses; this reads like a classic ML problem.
The best solution here is to use ML with a classification model and a large data set. Here's a walk-through that might help you get started. However, this isn't going to be a few minutes of coding: it's a typical ML task that involves collecting and tagging lots of data (or having access to pre-tagged data), building an ML pipeline, training a neural net, and so forth.
Here's another link that may be helpful. He's using a pretrained model to reduce the amount of data required to get started, but you're still going to have to put in quite a bit of work to get this going.
My goal is to be able to detect a specific noise that comes through the speakers of a PC using Python. That means the following, in pseudo code:
Sound is being played out of the speakers, by applications such as games for example.
My "audio to detect" sound happens, and I want to detect that, and take an action.
The specific sound I want to detect can be found here.
If I break that down, I believe I need two things:
A way to sample the audio that is being streamed to an audio device
I actually have this bit working, with the code found here: https://gist.github.com/renegadeandy/8424327f471f52a1b656bfb1c4ddf3e8. It is based on the sounddevice plot example, which I combine with an audio loopback device. This allows my code to receive a callback with the data that is being played to the speakers.
A way to compare each sample with my "audio to detect" sound file.
The detection does not need to be exact; it just needs to be close. For example, there will be lots of other noises happening at the same time, so it's more about being able to detect the footprint of the "audio to detect" within an audio stream containing a variety of sounds.
Having investigated this, I found technologies mentioned in this post on SO and also this interesting article on Chromaprint. The Chromaprint article uses fpcalc to generate fingerprints, but because my "audio to detect" is around 1-2 seconds, fpcalc can't generate the fingerprint. I need something that works across shorter timespans.
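To make the comparison step concrete, here is a rough, hypothetical sketch of what I'm imagining, using plain cross-correlation (chunk and fingerprint would be numpy arrays of samples at the same rate, and the threshold is a guess):

import numpy as np
from scipy.signal import correlate

def contains_sound(chunk, fingerprint, threshold=0.6):
    # Slide the fingerprint across the chunk; values near 1.0 mean
    # a close match somewhere inside the chunk
    fp = fingerprint / (np.linalg.norm(fingerprint) + 1e-9)
    corr = correlate(chunk, fp, mode="valid")
    # Normalize each alignment by the energy of the chunk window under it
    window_energy = np.sqrt(correlate(chunk.astype(float) ** 2,
                                      np.ones(len(fp)), mode="valid")) + 1e-9
    return (corr / window_energy).max() > threshold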
Can somebody help me with problem #2 as detailed above?
How should I attempt this comparison (ideally with a little example), based on my sampling using sounddevice in the audio_callback function?
Many thanks in advance.
I have .wav files sampled at 192kHz and want to split them based on time to many smaller files while keeping the same sample rate.
To start with, I thought I would just open and re-save a wav file using pydub in order to learn how to do this. However, when I save it, it appears to be resaved at a much lower file size. I'm not sure why; perhaps the sample rate is lower? I also can't open the new file with the audio analysis program I usually use (Song Scope).
So I had two questions:
- How do I open, read, copy and resave a wav file using pydub without changing it? (Sorry, I know this is probably easy, I just can't find it yet.)
- Are Python and pydub a sensible choice for what I am trying to do, or is there a much simpler way?
What I am exactly trying to do is:
Split about 10 high-sample-rate wav files (~1 GB each) into many (about 100) small wav files. (I plan to make a list of start and end times for each of the smaller wav files needed, then get Python to open, copy and resave the wav file data between those times, roughly as sketched below.)
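Something like this is the kind of slicing I have in mind (hypothetical times; pydub slices AudioSegments by milliseconds):

from pydub import AudioSegment

audio = AudioSegment.from_wav("20190212_164446.wav")

# Hypothetical (start, end) times in milliseconds for each small clip
clip_times = [(1000, 2500), (8000, 9200)]

for i, (start_ms, end_ms) in enumerate(clip_times):
    audio[start_ms:end_ms].export("clip_{}.wav".format(i), format="wav")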
I assume it is possible since I've seen questions for lower frequency wav files, but if you know otherwise or know of a simpler way please let me know. Thanks!!
My code so far is as follows:
from pydub import AudioSegment
# Input audio file to be sliced
audio = AudioSegment.from_wav("20190212_164446.wav")
audio.export("newWavFile.wav")
(I put the wav file and ffmpeg into the same directory as the Python file to save time, since I was having a lot of trouble getting pydub to find ffmpeg.)
In case it's relevant the files are of bat calls, these bats make calls between around 1kHz and 50kHz which is quite low frequency for bats. I'm trying to crop out the actual calls from some very long files.
I know this is a basic question, I just couldn't find the answer yet, please also feel free to direct me to the answer if it's a duplicate.
thanks!!
Seeking to random points in a video file with OpenCV seems to be much slower than in media players like Windows Media Player or VLC. I am trying to seek to different positions on a video file encoded in H264 (or MPEG-4 AVC (part10)) using VideoCapture and the time taken to seek to the position seems to be proportional to the frame number queried. Here's a small code example of what I'm trying to do:
import cv2

cap = cv2.VideoCapture('example_file')
frame_positions = [200, 400, 8000, 200000]
for frame_position in frame_positions:
    cap.set(cv2.cv.CV_CAP_PROP_POS_FRAMES, frame_position)
    ret, img = cap.read()  # read() returns a (success, frame) tuple
    cv2.imshow('window', img)
    cv2.waitKey(0)
The perceived delays before the images are displayed are proportional to the frame number: frames 200 and 400 appear with barely any delay, 8000 with some noticeable lag, but 200000 takes almost half a minute.
Why isn't OpenCV able to seek as "quickly" as say Windows Media Player? Could it be that OpenCV is not using the FFMPEG codecs correctly while seeking? Would building OpenCV from sources with some alternate configuration for codecs help? If so, could someone tell me what the configuration could be?
I have only tested this on Windows 7 and 10 PCs, with OpenCV binaries as is, with relevant FFMPEG DLLs in system path.
Another observation: with OpenCV (binaries) versions greater than 2.4.9 (e.g. 2.4.11, 3.3.0), the first seek works, but not the subsequent ones. That is, it can seek to frame 200 from the above example, but not to 400 and the rest; the video just jumps back to frame 0. But since it works for me with 2.4.9, I'm happy for now.
GPU acceleration should not matter for seeking, because you are not decoding frames. In addition, even if you were decoding frames, doing so on the GPU would be slower than on the CPU, because your CPU nowadays has video codecs "soldered" into the chip, which makes video decoding very fast, and there would have to be some book-keeping to shovel data from main memory into the GPU.
It sounds like OpenCV implements a "safe" way of seeking: Video files can contain stream offsets. For example, your audio stream may be set off against your video stream. As another example, you might have cut away the beginning of a video and saved the result. If your cut did not happen precisely at a key frame, video editing software like ffmpeg will include a small number of frames before your cut in the output file, in order to allow the frame at which your cut happened to be decoded properly (for which the previous frames might be necessary). In this case, too, there will be a stream offset.
In order to make sure that such offsets are interpreted the right way, that is, to really hit exactly the desired frame relative to "time 0", the only "easy" but expensive way is to really eat and decode all the video frames. And that's apparently what OpenCV is doing here. Your video players do not bother with this, because everyday users don't notice, and the seek controls in the GUI are much too imprecise anyway.
I might be wrong about this. But answers to other questions and some experiments I conducted to evaluate them showed that only the "slow" way of counting the frames in a video gave accurate results.
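To illustrate what the "slow but accurate" approach looks like from user code, stepping to a frame by decoding every frame before it goes roughly like this (a sketch of the idea, not OpenCV's internals):

import cv2

cap = cv2.VideoCapture('example_file')

# grab() decodes and advances one frame without converting the image,
# so it is the cheapest way to step through the stream accurately
target = 8000
position = 0
while position < target and cap.grab():
    position += 1
ret, img = cap.retrieve()  # fetch only the frame we stopped at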
It's likely because that is a very basic code example and the mentioned applications are doing something more clever.
A few points:
Windows Media Player has hardware acceleration
Windows Media Player almost definitely uses your GPU; you could try disabling this to see what difference it makes
VLC is an open-source project, so you could check out its code to see how it does video seeking
VLC probably also uses your GPU
OpenCV provides GPU functions that will most likely make your code much quicker
If seeking speed is important, you almost definitely want to work with the GPU when doing video operations:
https://github.com/opencv/opencv/blob/master/samples/gpu/video_reader.cpp
Here are some related github issues:
https://github.com/opencv/opencv/issues/4890
https://github.com/opencv/opencv/issues/9053
Re-encode your video with ffmpeg. It works for me.
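For instance, a hypothetical re-encode that inserts keyframes more often (-g sets the keyframe interval), which generally makes seeking cheaper at the cost of a larger file:

ffmpeg -i input.mp4 -c:v libx264 -g 30 -c:a copy output.mp4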