On a Mac laptop (OS X 10.9.5), when playing sounds from a Python program, I get an initial start-up latency of about 0.5 s before the sound plays. If I have been playing sound within the last minute or so, there is no such latency. I have seen passing references to this kind of thing online but without much insight (e.g., http://music.columbia.edu/pipermail/portaudio/2014-November/016364.html). I looked for an Apple API way to disable it (like there is for the screen saver) but did not see anything obvious. The issue might be specific to laptops, as a power-saving feature for example. It happens not only on battery power, but also when plugged in.
Question: From Python on OS X, how can I tell the Mac to do whatever it needs to do to avoid that 0.5 s latency the first time it plays a sound?
Constraints: Calling a command like pmset via subprocess is acceptable, unless it needs root (sudo) privileges; i.e., only normal user-space commands are acceptable. Also not acceptable: it's easy to write a little thread that plays a short, near-silent sound every 30 s or so, but that adds complexity to the program and uses resources -- there has to be a better way to do it.
Is the delay possibly due to attempting to play a huge sound file? Is the media file loaded into memory before the request to render it?
Try loading the audio into a buffer, then, on the play signal, render directly from that buffer.
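As a sketch of that idea, here is a minimal, hypothetical example of front-loading the file I/O: the file name and helper functions are made up, and it generates its own test tone so it is self-contained. Note this only removes disk access from the play path; it is not guaranteed to cure the OS X wake-up latency described above.

```python
import io
import math
import struct
import wave

def make_tone(path, seconds=0.1, rate=8000, freq=440):
    """Write a short mono 16-bit sine-wave WAV file (test fixture)."""
    n = int(seconds * rate)
    frames = b"".join(
        struct.pack("<h", int(32767 * math.sin(2 * math.pi * freq * i / rate)))
        for i in range(n)
    )
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(frames)

def preload(path):
    """Read the whole sound file into RAM once, at startup."""
    with open(path, "rb") as f:
        return f.read()

make_tone("tone.wav")
data = preload("tone.wav")        # done once, ahead of time
sound_file = io.BytesIO(data)     # play-time path touches only memory
```

The in-memory bytes can then be handed to whatever backend is in use, e.g. wrapped in io.BytesIO as a file-like object for a library that expects one.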
Out of the blue (although I might have missed an automated update), the flip() method of pyglet on my PC became about 100 times slower (my script drops from about 20 FPS to 0.2 FPS, and profiling shows that flip() is to blame).
I don't fully understand this, but since my OS is Windows 10, the method seems to just be a way to run the wglSwapLayerBuffers OpenGL double-buffering cycle from Python. Everything else seems to run at normal speed, including programs that use OpenGL. This has happened before and fixed itself after a restart, so I didn't really look into it at the time.
Now, restarting doesn't change anything. I updated my GPU driver, I tried disabling vsync, I looked for unrelated processes that might use a lot of memory and/or GPU memory. I re-installed the latest stable version of pyglet.
Now I have no idea how to even begin troubleshooting this...
Here's a minimal example that prints about 0.2 for me instead of 20 (FPS).
import pyglet
from pyglet.gl import *

def timing(dt):
    print(1 / dt)

game_window = pyglet.window.Window(1, 1)

if __name__ == '__main__':
    pyglet.clock.schedule_interval(timing, 1 / 20.0)
    pyglet.app.run()
(Within pyglet.app.run(), profiling shows me that it's the flip() method that takes basically all the time).
Edit: my real script, which displays frequently updated images using pyglet, causes no increase in GPU usage whatsoever (I also checked the effect of a random program (namely Minecraft) to make sure the GPU monitoring tool I use works, and it does cause an increase). I think this rules out the possibility that I somehow don't have enough computing power available due to some unrelated issue.
OK, I found a way to solve my issue in this Google Groups conversation about a different problem with the same method: https://groups.google.com/forum/#!topic/pyglet-users/7yQ9viOu75Y (the change suggested in Claudio Canepa's reply, namely making flip() link to the GDI version of the same function instead of wglSwapLayerBuffers, brings things back to normal).
I'm still not sure why wglSwapLayerBuffers behaved so oddly in my case. I guess problems like mine are part of the reason why the GDI version is "recommended". However understanding why my problem is even possible would still be nice, if someone gets what's going on... And having to meddle with a relatively reliable and respected library just to perform one of its most basic tasks feels really, really dirty, there must be a more sensible solution.
When I play a sound every 0.5 second with PyGame:
import pygame, time

pygame.mixer.init()
s = pygame.mixer.Sound("2.wav")
for i in range(8):
    pygame.mixer.Channel(i).play(s)
    time.sleep(0.5)
it doesn't respect the timing correctly at all.
There are pauses of 0.2 s, then 0.7 s, then 0.2 s again; it's very irregular.
Notes:
I know that time.sleep() is not the most accurate timer in the world, but even with the more accurate solutions from here, the problem is still present
Tested on a Raspberry Pi
The problem is still there if I play many different files, s[i].play(), with i in a big range. So the problem doesn't come from trying to replay the same file
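For what it's worth, the usual fix for drifting sleep loops (the pattern behind the "more accurate" solutions linked above) is to sleep until absolute deadlines rather than for a fixed interval, so per-beat errors don't accumulate. A minimal sketch, with a no-op standing in for the play() call:

```python
import time

def run_metronome(n_beats, period, action):
    """Fire `action` every `period` seconds against absolute deadlines,
    so sleep errors on one beat do not push back all later beats."""
    t0 = time.perf_counter()
    for i in range(n_beats):
        action(i)
        deadline = t0 + (i + 1) * period
        remaining = deadline - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)
    return time.perf_counter() - t0

# 4 beats at 50 ms should take ~0.2 s total, however long `action` runs
elapsed = run_metronome(4, 0.05, lambda i: None)
```

This tightens the Python-side timing, but it cannot remove jitter introduced downstream by the mixer's own buffering.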
Here is the reason:
Even if we decrease the audio buffer to the minimum the sound card supports (1024 or 512 samples instead of pygame's default 4096), the differences are still there, making what should be a "metronome beat" irregular.
I'll update with a working solution as soon as I find one. (I have a few ideas in this direction).
As you wrote in your own answer, the reason for the timing problems very likely is the fact that the audio callback runs decoupled from the rest of the application.
The audio backend typically has some kind of a clock which is accessible from both inside the callback function and outside of it.
I see two possible solutions:
use a library that allows you to implement the callback function yourself, calculate the starting times of your sounds inside the callback function, compare those times with the current time of the "audio clock" and write your sound to the output at the appropriate position in the output buffer.
use a library that allows you to specify the exact time (in terms of the "audio clock") when to start playing your sounds. This library would do the steps of the previous point for you.
For the first option, you could use the sounddevice module. The callback function (which you'll have to implement) gets an argument named time with an attribute time.outputBufferDacTime: a floating-point value specifying the time (in seconds) at which the first sample of the output buffer will be played back.
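The bookkeeping inside such a callback boils down to turning clock deltas into sample offsets. Here is a pure-Python sketch of just that arithmetic (no audio backend involved; the function name, rate, and block size are made up for illustration):

```python
SAMPLE_RATE = 44100   # assumed stream sample rate
BLOCK_SIZE = 1024     # assumed callback block size, in samples

def start_offset(buffer_dac_time, start_time,
                 rate=SAMPLE_RATE, block=BLOCK_SIZE):
    """Return where in the current output block a sound scheduled for
    `start_time` should begin, or None if it falls outside this block.
    `buffer_dac_time` plays the role of time.outputBufferDacTime."""
    delta = start_time - buffer_dac_time   # seconds until the sound starts
    offset = round(delta * rate)           # ...converted to samples
    if 0 <= offset < block:
        return offset
    return None

# A sound due 10 ms after the block's first sample lands at sample 441:
print(start_offset(buffer_dac_time=2.000, start_time=2.010))  # -> 441
```

In a real callback you would then copy the sound's samples into the output buffer starting at that offset.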
Full disclosure: I'm the author of the sounddevice module, so my recommendation is quite biased.
Quite recently, I've started working on the rtmixer module, which can be used for the second option.
Please note that this is in very early development state, so use it with caution.
With this module, you don't have to write a callback function, you can use the function rtmixer.Mixer.play_buffer() to play an audio buffer at a specified time (in seconds). For reference, you can get the current time from rtmixer.Mixer.time.
I'm trying to write something that catches the audio being played to the speakers/headphones/sound card and determines whether anything is playing and what the longest silence is. This is to test an application that is being executed, to see if it stops playing audio after a certain point; as such, I don't actually need to know what the audio itself is, just whether or not audio is playing.
I need this to be fully programmatic (so not requiring the use of GUI tools or the like, to set up an environment). I know applications like projectM do this, I just can't for the life of me find anything anywhere that denotes how.
An audio level meter would also work for this, as would oscilloscope data or the like; I'd really take any recommendation.
Here is a very similar question: record output sound in python
You could try to route your output to a new device with jack and record this with portaudio. There are Python Bindings for portaudio called pyaudio and for jack called PyJack. I have never used the latter one but pyaudio works great.
Is it possible to get the system audio output (the exact same thing that goes through the speakers) and analyze it in real time with Python? My intention is to build a sound visualizer. I know it's possible to access the microphone with pyaudio, but I was not able to access the sound card's output in any way. I'm looking for a solution that works on Windows.
Thank you for reading.
I'm not sure how this project is doing these days; it's been a long time since it was updated. PyVST lets you run Python code in a VST inside a VST host, which makes it possible to handle realtime audio events.
You might want to look at http://code.google.com/p/pyo/ for some ideas about how to handle DSP data as well.
First of all, I am not a veteran programmer in any language. But I've been tinkering with Python pretty substantially over the last couple of months, so I wouldn't consider myself completely green either.
Some keywords for you:
- Windows
- Python 2.6
- Pygame, CGKit
Okay, so I've got the CGKit module, which contains a WinTab wrapper for capturing data from the Wacom tablet. WinTab requires a certain window to be active in order for it to start capturing, and for that I'm using PyGame. However, PyGame is pretty brutal on the CPU, giving me between 100-200 fps when drawing simple text and rectangles (meters for the Wacom input data) and about 200-400 fps when not blitting anything.
Now, the tablet hardware and the WinTab API support a transfer rate of 200 Hz, which is all good. The problem is that the data I'm getting from WinTab isn't arriving at 200 Hz (5 ms per packet) but at the current framerate of my PyGame window, which, on top of everything, is not constant.
So you see the problem. For WinTab to acquire any data, it has to have a window assigned to it, and that window needs to be active. But having a PyGame window open means the stream of data is limited to the framerate of that window.
I'm sure there are other window managers I could use that would take up little or no CPU, but what I'd really like is for WinTab to acquire the data at a constant 200 Hz without any dependencies.
I'm thinking threading: breaking up the gather-data and drawing parts. But since WinTab needs a window to get any data in the first place, I can't figure out how that would be possible.
Also note that I've never threaded anything before, although I do understand the concept.
So, hope I made the problem reasonably clear.
The question is: how can I get the data at a minimum of 200 Hz, while still being able to do maybe 20-30 fps in my PyGame window?
Without being experienced on the subject, I would say that threads are not a good idea for precise timing functions. From what I've studied, there is no precise timing enforced on them.
I remember there is a function in the pygame.time module that forces your code to run at a specific rate and thus limits the FPS. That is for when your code runs too fast.
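That function is pygame.time.Clock.tick(). Here is a pure-Python sketch of what such a limiter does (stdlib only, so it runs without pygame; the class name is made up):

```python
import time

class FrameClock:
    """Minimal stand-in for pygame.time.Clock: cap a loop at a target FPS."""

    def __init__(self):
        self._last = time.perf_counter()

    def tick(self, fps):
        """Sleep off whatever is left of this frame's time budget;
        return the milliseconds elapsed since the previous tick."""
        target = 1.0 / fps
        elapsed = time.perf_counter() - self._last
        if elapsed < target:
            time.sleep(target - elapsed)
        now = time.perf_counter()
        dt_ms = (now - self._last) * 1000.0
        self._last = now
        return dt_ms

clock = FrameClock()
for frame in range(3):
    # ... drawing would happen here ...
    dt_ms = clock.tick(30)   # caps the loop at roughly 30 FPS
```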
Now if your app proves too slow for the 200 Hz rate, that is, it takes more than 5 ms per loop, then you will have to move some of your code to the C/C++ domain and avoid using pygame for at least that part. I advise using Cython, since Cython allows you to write only Python code and you don't need to know C/C++. But of course you can mix Python with C/C++ and even Fortran with Cython; it's extremely flexible and easy to use.
Cython Website
My experience with pygame on a 1.6 GHz Atom processor, which is of course very slow, gave me 1 ms for zero redraws, so pygame can be really fast, but not blazing fast. It will depend heavily on what you draw to the screen during your loop. I would guess that on a Core Duo that 1 ms should drop to at least 0.3 ms. So it will also depend on your processing speed.
Another approach is the multiprocessing module, which can take full advantage of multiple cores, assigning one core to your app and another to receiving data from the tablet.
Multiprocessing module documentation
There are literally hundreds of ways to speed up Python.