Slow down playing song without reloading mixer - python

I'm looking into learning how to use pygame, and along the way I'm experimenting with things and making a little test game.
Now, I haven't found an answer for something. I want to have music (pygame.mixer.music) and perhaps sounds (pygame.mixer.Sound) play normally, but, when asked, change to a specific frequency. At this point all I've found is to do pygame.mixer.init(frequency=9000) and, when required, call pygame.mixer.quit() followed by pygame.mixer.init(frequency=9001). That is what I want to avoid: I'd like the song to keep playing without cutting out, to sound like it never stopped, and to stay at the same point it was at. I'm not sure whether this is possible in pygame, hence my asking. If it isn't, I'm open to recommendations for libraries that can do it.
EDIT: If you suggest a library that can do this, I'd prefer one that supports OGG. Although it would be nice if it could be done in pygame.

pygame.mixer does not support playing a stream that is resampled on the fly; you must resample the audio yourself. To halve the speed, transform X samples per second into 2X samples, then have the audio device play them back at the original X samples per second.
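For pygame.mixer.Sound objects you can do that resampling with numpy via pygame.sndarray. Below is a minimal sketch; the file name and speed factor are placeholders, speed and pitch change together with this method, and it does not work on the streamed pygame.mixer.music:

```python
# Sketch: slow down a pygame Sound by resampling, without touching the mixer.
# Assumes a stereo 16-bit mixer; "song.ogg" and the 0.5 factor are examples.
import numpy as np
import pygame

pygame.mixer.init(frequency=44100, size=-16, channels=2)

def change_speed(sound, factor):
    """Return a new Sound that plays `factor` times as fast (0.5 = half speed)."""
    samples = pygame.sndarray.array(sound)        # shape (frames, 2) for stereo
    n_out = int(samples.shape[0] / factor)        # slower => more output frames
    src = np.linspace(0, samples.shape[0] - 1, n_out)
    resampled = np.stack(
        [np.interp(src, np.arange(samples.shape[0]), samples[:, ch])
         for ch in range(samples.shape[1])], axis=1,
    ).astype(samples.dtype)                       # back to int16
    return pygame.sndarray.make_sound(np.ascontiguousarray(resampled))

sound = pygame.mixer.Sound("song.ogg")
change_speed(sound, 0.5).play()                   # half speed, one octave lower
```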

Related

How to calculate beats per minute from heart sounds recorded through android MIC?

I have many .wav files of heart sounds recorded through the MIC by putting the phone directly on people's chests. I want to calculate the BPM from these sounds. Could you please help with this? Any library, algorithm, or tutorial suggestions?
Can you (are you allowed to) put some sample somewhere?
I've played with some ECG (up to 12-electrode) and neural signals (spikes that look a lot like the R-S transition). Those spikes were so big that a simple find_peaks from scipy.signal was enough to detect them. I used a Butterworth filter before that, though. You might need that too: filtering out the 50/60 Hz mains is common, and there may be similar noise in audio as well.
After finding the peaks, beats per minute is a division (and probably some averaging).
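A rough sketch of that filter-then-find_peaks approach follows; the band edges, peak distance, and prominence are guesses you would tune against your own recordings:

```python
# Sketch: BPM from a heart-sound recording via band-pass filter + peak picking.
# "heart.wav" and all the thresholds below are assumptions to be tuned.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt, find_peaks

rate, data = wavfile.read("heart.wav")
data = data.astype(float)
if data.ndim > 1:                                  # mix stereo down to mono
    data = data.mean(axis=1)

# Band-pass around the main heart-sound energy (roughly 20-150 Hz);
# a notch at 50/60 Hz could be added if mains hum leaks through.
b, a = butter(4, [20, 150], btype="band", fs=rate)
filtered = filtfilt(b, a, data)

# Peak-pick on the envelope; the 0.4 s minimum distance keeps the S1 and
# S2 sounds of one beat from being counted as two beats.
envelope = np.abs(filtered)
peaks, _ = find_peaks(envelope, distance=int(0.4 * rate),
                      prominence=envelope.std())

intervals = np.diff(peaks) / rate                  # seconds between beats
print("BPM:", 60.0 / intervals.mean())
```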
What you're trying to do is essentially compute the Fourier transform of the given sound file and then identify the strongest peak. That is likely to be the frequency of your dominant signal (which in this case should be the heart rate).
Thankfully, someone else has already asked and answered this on Stack Overflow.
The only caveat with this approach is that other repetitive signals may dominate the heartbeat, in which case you may need to clean your data first.
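A sketch of that idea, with one refinement of my own: the beat rate shows up in the signal's envelope rather than in the raw audio band, so the FFT here is taken over the rectified signal. The file name and the 30-240 BPM search band are assumptions:

```python
# Sketch: dominant beat frequency of a heart recording via FFT of the envelope.
import numpy as np
from scipy.io import wavfile

rate, data = wavfile.read("heart.wav")            # hypothetical file
data = data.astype(float)
if data.ndim > 1:
    data = data.mean(axis=1)

envelope = np.abs(data)                           # beat rhythm lives here
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), d=1.0 / rate)

band = (freqs >= 0.5) & (freqs <= 4.0)            # 30-240 BPM window
dominant = freqs[band][np.argmax(spectrum[band])]
print("BPM:", dominant * 60.0)
```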

Sound properties manipulation in python

I am searching for a lib that helps me work with many sound properties.
I mean, I need something to get each frequency in a sound, to get the length and width of the sound waves, and to measure the peaks and troughs of the sounds.
I need something that gets me as close as possible to manipulating and measuring sound waves; this is something I need more for scientific research than for an application.
It is hard to find something like that. If you could help me with some links or anything, it would be a great help.
If you have something even in other languages, it could help me.
I will keep this question updated as I find answers as well.
Thanks in advance.
The Python wiki page PythonInMusic has a lot of links, some of which will probably be useful to you. It includes a whole range of projects to input and output sound in different formats. A quick glance shows a couple of more specialised projects that might also be helpful:
audiolab - bridges the gap between numpy and sound formats
musickit - support for signal processing, and apparently used in 'scientific experiments'
These will probably give you the tools to read sounds in and convert them into a useful form for analysis.
After that, it seems to me that what you are describing is more about signal/waveform analysis, than sound per se, so that may be a more helpful direction to search in. I'm not aware of any Python package that does exactly what you're looking for. Measurement of things like wavelength, peak and trough doesn't sound particularly difficult to me though - you could look at coding your own routines for this using SciPy.
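For example, a home-grown measurement routine along those lines might look like the sketch below; the file name is a placeholder, and peaks, troughs, and period come from scipy.signal plus plain zero-crossing counting:

```python
# Sketch: measure peaks, troughs, and the dominant period of a recorded wave.
import numpy as np
from scipy.io import wavfile
from scipy.signal import find_peaks

rate, data = wavfile.read("tone.wav")              # hypothetical file
data = data.astype(float)
if data.ndim > 1:
    data = data.mean(axis=1)

peaks, _ = find_peaks(data)                        # local maxima (crests)
troughs, _ = find_peaks(-data)                     # local minima

# Average period from successive rising zero crossings.
signs = np.sign(data)
rising = np.where((signs[:-1] <= 0) & (signs[1:] > 0))[0]
period_s = np.diff(rising).mean() / rate

print("Estimated frequency:", 1.0 / period_s, "Hz")
print("Peak amplitude:", data[peaks].max(), "trough:", data[troughs].min())
```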

Python: Most accurate way to time the playing of sound samples

There are plenty of question and answers about playing sound samples in Python, but I'm interested to know the most accurate way to time the playing of samples. Suppose I'm writing a bit of Python capable of playing a complex rhythm made up from samples of drum hits: I want the timer-based triggering of my audio samples to be as accurate as possible.
Any recommendations? Would be happy to hear any ideas such as "Audio library X does accurate timings fine", or "The most accurate general timing mechanism in Python is Y", etc.
For sound samples I only have experience with SDL which you can use through the Pygame package:
http://www.pygame.org/docs/ref/mixer.html
While I can't speak to its performance with the kind of application you have in mind, SDL is used in a fair number of games, so I imagine its performance is fairly solid. I can say that Pygame is quite easy to figure out and has quite a bit of documentation and example games to look at.
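As a hedged sketch of how you might trigger samples on a fixed grid with pygame: ask for a small mixer buffer to cut latency, and schedule each hit against an absolute clock so timing errors don't accumulate. The sample file and tempo here are made up:

```python
# Sketch: trigger a drum sample on a steady 120 BPM grid with pygame.
import time
import pygame

pygame.mixer.pre_init(frequency=44100, buffer=256)   # small buffer = low latency
pygame.init()

kick = pygame.mixer.Sound("kick.wav")                # hypothetical sample
beat = 60.0 / 120.0                                  # seconds per beat at 120 BPM

start = time.perf_counter()
for i in range(16):
    target = start + i * beat                        # absolute deadline, no drift
    while time.perf_counter() < target:
        time.sleep(0.001)                            # spin gently up to the deadline
    kick.play()
```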
Some interesting audio-related projects using pygame:
http://www.pygame.org/project-noiselib-1442-2573.html
http://www.pygame.org/project-pygame+music+grid+beta+.9-1185-.html

Recognising the tone of audio

I have a guitar, and I need my PC to be able to tell which note is being played by recognizing the tone. Is it possible to do this in Python, and is it possible with pygame? Being able to do it in pygame would be very helpful.
To recognize the frequency of an audio signal, you would use the FFT (fast Fourier transform) algorithm. As far as I can tell, PyGame has no means of recording audio, nor does it support the FFT.
First, you need to capture the raw sampled data from the sound card; this kind of data is called PCM (Pulse Code Modulation). The simplest way to capture audio in Python is using the PyAudio library (Python bindings to PortAudio). GStreamer can also do it, but it's probably overkill for your purposes. Capturing 16-bit samples at a rate of 48000 Hz is pretty typical and probably the best a normal sound card will give you.
Once you have raw PCM audio data, you can use the fftpack module from the scipy library to run the samples through the FFT. This will give you the frequency distribution of the analysed audio signal, i.e., how strong the signal is in certain frequency bands. Then it's a matter of finding the frequency that has the strongest signal.
You might need some additional filtering to avoid harmonic frequencies, though I am not sure.
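Putting those two steps together, a minimal sketch might look like the following; the chunk size and the single-buffer capture are simplifications:

```python
# Sketch: capture one buffer of mic audio with PyAudio and find the
# strongest frequency with scipy's fftpack. 16-bit mono at 48 kHz.
import numpy as np
import pyaudio
from scipy import fftpack

RATE, CHUNK = 48000, 4096

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1,
                 rate=RATE, input=True, frames_per_buffer=CHUNK)
raw = stream.read(CHUNK)                             # one buffer of PCM
stream.stop_stream(); stream.close(); pa.terminate()

samples = np.frombuffer(raw, dtype=np.int16).astype(float)
samples -= samples.mean()                            # drop the DC offset

spectrum = np.abs(fftpack.fft(samples))[: CHUNK // 2]
freqs = fftpack.fftfreq(CHUNK, d=1.0 / RATE)[: CHUNK // 2]
print("Strongest frequency:", freqs[np.argmax(spectrum)], "Hz")
```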
I once wrote a utility that does exactly that - it analyses what sounds are being played.
You can look at the code here (or you can download the whole project; it's integrated with Frets on Fire, an open-source Guitar Hero clone, to create a real guitar hero). It was tested using a guitar, a harmonica, and whistles :) The code is ugly, but it works :)
I used pymedia to record, and scipy for the FFT.
Except for the basics that others already noted, I can give you some tips:
If you record from mic, there is a lot of noise. You'll have to use a lot of trial-and-error to set thresholds and sound clean up methods to get it working. One possible solution is to use an electric guitar, and plug its output to the audio-in. This worked best for me.
Specifically, there is a lot of noise around 50 Hz. That's not so bad by itself, but its overtones (see below) are at 100 Hz and 150 Hz, and those are close to the guitar's G2 and D3... As I said, my solution was to switch to an electric guitar.
There is a tradeoff between speed of detection, and accuracy. The more samples you take, the longer it will take you to detect sounds, but you'll be more accurate detecting the exact pitch. If you really want to make a project out of this, you probably need to use several time scales.
When a tone is played, it has overtones. Sometimes, after a few seconds, the overtones can even be more powerful than the base tone. If you don't deal with this, your program will think it heard E2 for a few seconds and then E3. To overcome this, I kept a list of currently playing sounds, and as long as a note or one of its overtones still had energy in it, I assumed it was the same note being played.
It is especially hard to detect when someone plays the same note two (or more) times in a row, because it's hard to distinguish that from random fluctuations in sound level. You'll see in my code that I had to use a constant configured to match the guitar used (apparently every guitar has its own pattern of power fluctuations).
You will need to use an audio library such as the built-in audioop.
Analyzing the specific note being played is not trivial, but can be done using those APIs.
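Once you have a dominant frequency from the FFT, mapping it to a note name is plain 12-tone equal temperament arithmetic. A small helper sketch, using the usual A4 = 440 Hz reference:

```python
# Sketch: nearest note name for a frequency, via the MIDI note formula
# n = 69 + 12*log2(f / 440).
import math

NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_name(freq_hz, a4=440.0):
    midi = round(69 + 12 * math.log2(freq_hz / a4))
    return NAMES[midi % 12] + str(midi // 12 - 1)

print(note_name(82.41))   # low E string of a guitar -> "E2"
```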
Also could be of use: http://wiki.python.org/moin/PythonInMusic
Very similar questions:
Audio Processing - Tone Recognition
Real time pitch detection
Real-time pitch detection using FFT
Turning sound into a sequence of notes is not an easy thing to do, especially with multiple notes at once. Read through Google results for "frequency estimation" and "note recognition".
I have some Python frequency estimation examples, but this is only a portion of what you need to solve to get notes from guitar recordings.
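One frequency-estimation technique worth knowing, shown here only as an illustrative sketch, is autocorrelation; it tends to be more robust than a raw FFT peak for harmonic-rich signals like a guitar:

```python
# Sketch: pitch estimation by autocorrelation. The frame length, sample
# rate, and 50-1000 Hz search range are assumptions.
import numpy as np

def autocorrelation_pitch(frame, rate, fmin=50.0, fmax=1000.0):
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(rate / fmax), int(rate / fmin)    # plausible lag range
    lag = lo + np.argmax(corr[lo:hi])              # strongest self-similarity
    return rate / lag

# Quick self-test on a synthetic 196 Hz (G3) tone:
rate = 48000
t = np.arange(2048) / rate
print(autocorrelation_pitch(np.sin(2 * np.pi * 196 * t), rate))
```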
This link shows someone doing it in VB.NET, but the basics of what needs to be done to achieve your goal are captured in the links below.
STFT
Cooley-Tukey
FFT

Playing sounds with python and changing their tone during playback?

Is there a way to do this? Also, I need it to work with pygame, since I want audio in my game. I'm asking because I didn't see any tone-change function in pygame. Does anyone know?
Update:
I need to do something like the noise of a car accelerating. I don't really know if it is timbre or tone.
Well, it depends on how you're producing your sounds. I'm not sure whether this is possible with pygame, but SDL (which pygame is built on) lets you register a callback that fills the sound buffer, and in that callback you can change the frequency of the sine wave (or whatever) to get different tones, given that you generate the sound there.
If you're using a pre-rendered tone, or sound file, then you'll probably have to resample it to get it to play at different frequencies, although it'd be difficult to keep the same length. If you're talking about changing the timbre of the sound, then that's a whole different ballpark...
Also, it depends on how fast the sound needs to change: if you can accept a little lag in the response, you could probably generate a few short sounds and play/loop them as necessary. I'm not sure how constantly replaying sounds would affect performance or overall audio quality, though: you'd have to make sure the end of each waveform transitions smoothly into the beginning of the next one (maybe).
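A rough sketch of that pre-generated-tones idea for an engine-style sound: synthesise short sine loops at several pitches with numpy and swap between them as the car speeds up. The mixer settings and pitch steps are made up; rounding each tone to a whole number of cycles keeps the loop point seamless:

```python
# Sketch: loopable engine-like tones built with numpy and pygame.sndarray.
import numpy as np
import pygame

RATE = 44100
pygame.mixer.pre_init(frequency=RATE, size=-16, channels=2)
pygame.init()

def tone(freq, duration=0.5, volume=0.3):
    """Build a seamlessly loopable stereo sine tone as a pygame Sound."""
    cycles = max(1, round(freq * duration))        # whole cycles => clean loop
    n = int(RATE * cycles / freq)
    t = np.arange(n) / RATE
    wave = (volume * 32767 * np.sin(2 * np.pi * freq * t)).astype(np.int16)
    return pygame.sndarray.make_sound(
        np.ascontiguousarray(np.column_stack([wave, wave])))

tones = [tone(f) for f in (110, 140, 180, 230)]    # rising "engine" pitches
current = tones[0]
current.play(loops=-1)
# ...later, as the car accelerates:
# current.stop(); current = tones[1]; current.play(loops=-1)
```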
