Edit - I eventually figured out the answer and have posted it below.
Using .audio_set_volume() on a media_player_new() object works fine with values 0-100, but the result is much quieter than the same value in the normal VLC application, by a factor of around 2-3. This can be remedied by using values greater than 100, but that introduces severe delays while changing the volume (not delays in the video or audio, just ~half-second delays before the volume updates).
No issues with my volume mixing from what I can tell. The player is being embedded in PyQt5. I can't find anyone else with this issue so I imagine there's an easy workaround I'm missing.
I never got a response, but I eventually figured it out on my own. Firstly, running the script through a command prompt/natively through Python causes the volume to be lower than normal (no idea why). This goes away when the script is compiled or set up as a default program.
Secondly, there's a VLC command-line argument called --gain that seems to default to a lower value when using libvlc directly than what the VLC application uses. When creating your instance, specify the argument like so (it takes a float value from 0-8):
instance = vlc.Instance(['--gain=8.0']) # make sure the arguments are in a list!
A gain of 8.0 is definitely higher than what VLC natively uses, but it's not ear-shatteringly loud. From what I can tell, the quality is not degraded at all and there's no delay while adjusting the volume with --gain set.
Don't forget to include any other arguments in the list if desired, such as ones from sys.argv.
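For reference, the whole thing might look something like this with the python-vlc bindings (the media file name and the 80% volume are just placeholders):
import vlc

# Sketch: create the instance with --gain, then control per-player volume
# in the usual 0-100 range. "video.mp4" and the value 80 are placeholders.
instance = vlc.Instance(['--gain=8.0'])      # arguments must be in a list
player = instance.media_player_new()
player.set_media(instance.media_new('video.mp4'))
player.audio_set_volume(80)                  # 0-100, no delay with --gain set
player.play()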
Out of the blue (although I might have missed some automated update), the flip() method of pyglet on my P.C. became about 100 times slower (my script goes from about 20 to 0.2 FPS, and profiling shows that flip() is to blame).
I don't understand this fully but since my OS is windows 10, the method seems to just be a way to run the wglSwapLayerBuffers OpenGL double-buffering cycle in python. Everything else seems to have a normal speed, including programs that use OpenGL. This has happened before and fixed itself after a restart, so I didn't really look further into it at the time.
Now, restarting doesn't change anything. I updated my GPU driver, I tried disabling vsync, I looked for unrelated processes that might use a lot of memory and/or GPU memory. I re-installed the latest stable version of pyglet.
Now I have no idea how to even begin troubleshooting this...
Here's a minimal example that prints about 0.2 instead of 20 (frames per second):
import pyglet
from pyglet.gl import *

def timing(dt):
    # dt is the time since the last scheduled call, so 1/dt is the achieved rate
    print(1 / dt)

game_window = pyglet.window.Window(1, 1)

if __name__ == '__main__':
    pyglet.clock.schedule_interval(timing, 1 / 20.0)
    pyglet.app.run()
(Within pyglet.app.run(), profiling shows me that it's the flip() method that takes basically all the time).
Edit: my real script, which displays frequently updated images using pyglet, causes no increase in GPU usage whatsoever. (I also checked the effect of an unrelated program, namely Minecraft, to make sure the GPU monitoring tool I use works; that one does cause an increase.) I think this rules out the possibility that I somehow don't have enough computing power available due to some unrelated issue.
OK, I found a way to solve my issue in this Google Groups conversation about a different problem with the same method: https://groups.google.com/forum/#!topic/pyglet-users/7yQ9viOu75Y (the changes suggested in Claudio Canepa's reply, namely making flip() call the GDI version of the same function instead of wglSwapLayerBuffers, bring things back to normal).
I'm still not sure why wglSwapLayerBuffers behaved so oddly in my case. I guess problems like mine are part of the reason why the GDI version is "recommended". Still, it would be nice to understand why my problem is even possible, if someone gets what's going on. And having to meddle with a relatively reliable and respected library just to perform one of its most basic tasks feels really, really dirty; there must be a more sensible solution.
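In case it helps someone, a monkey-patch along these lines should be equivalent to editing the library itself. This is only a sketch: the module path (pyglet.gl.win32), the Win32Context class and the canvas.hdc attribute are what I found in the pyglet 1.x source, so they may differ in other versions.
import ctypes
from pyglet.gl import win32 as gl_win32

# GDI's SwapBuffers performs the plain double-buffer swap.
_SwapBuffers = ctypes.windll.gdi32.SwapBuffers

def _gdi_flip(self):
    # Use the same device context that the wglSwapLayerBuffers call used.
    _SwapBuffers(self.canvas.hdc)

# Apply the patch before entering the app loop.
gl_win32.Win32Context.flip = _gdi_flip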
When I play a sound every 0.5 seconds with PyGame:
import pygame, time

pygame.mixer.init()
s = pygame.mixer.Sound("2.wav")

for i in range(8):
    pygame.mixer.Channel(i).play(s)
    time.sleep(0.5)
it doesn't respect the timing correctly at all.
It's like there are pauses of 0.2 s, then 0.7 s, then 0.2 s again; it's very irregular.
Notes:
I know that time.sleep() is not the most accurate in the world, but even with the more accurate solutions from here, the problem is still present
Tested on a RaspberryPi
The problem is still there if I play many different files with s[i].play(), with i in a big range, so it doesn't come from trying to replay the same file
Here is the reason:
Even if we decrease the audio buffer to the minimum supported by the sound card (1024 or 512 samples instead of pygame's default of 4096), the differences are still there, making irregular what should be a steady "metronome beat".
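For reference, the buffer size is requested before initialising the mixer; the sample rate and format below are just the values I used:
import pygame

# Sketch: ask for a 512-sample buffer instead of pygame's default 4096.
# Arguments are frequency, size, channels, buffer.
pygame.mixer.pre_init(44100, -16, 2, 512)
pygame.mixer.init()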
I'll update with a working solution as soon as I find one. (I have a few ideas in this direction).
As you wrote in your own answer, the reason for the timing problems very likely is the fact that the audio callback runs decoupled from the rest of the application.
The audio backend typically has some kind of a clock which is accessible from both inside the callback function and outside of it.
I see two possible solutions:
use a library that allows you to implement the callback function yourself, calculate the starting times of your sounds inside the callback function, compare those times with the current time of the "audio clock" and write your sound to the output at the appropriate position in the output buffer.
use a library that allows you to specify the exact time (in terms of the "audio clock") when to start playing your sounds. This library would do the steps of the previous point for you.
For the first option, you could use the sounddevice module. The callback function (which you'll have to implement) will get an argument named time, which has an attribute time.outputBufferDacTime, which is a floating point value specifying the time (in seconds) when the first sample of the output buffer will be played back.
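A rough sketch of that first option (the sample rate, the 0.5 s interval and the 10 ms "click" are arbitrary placeholders, and it assumes the interval is longer than one output buffer):
import numpy as np
import sounddevice as sd

samplerate = 44100
interval = 0.5                      # seconds between sounds
click = 0.5 * np.ones(441)          # 10 ms burst as a stand-in "sound"
next_start = None                   # next start time on the audio clock

def callback(outdata, frames, time, status):
    global next_start
    outdata.fill(0)
    t0 = time.outputBufferDacTime   # DAC time of the first sample in outdata
    if next_start is None:
        next_start = t0 + interval
    offset = int(round((next_start - t0) * samplerate))
    if offset < frames:             # the next sound falls inside this buffer
        offset = max(offset, 0)
        n = min(len(click), frames - offset)
        outdata[offset:offset + n, 0] = click[:n]
        next_start += interval

with sd.OutputStream(samplerate=samplerate, channels=1, callback=callback):
    sd.sleep(5000)                  # keep playing for 5 seconds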
Full disclosure: I'm the author of the sounddevice module, so my recommendation is quite biased.
Quite recently, I've started working on the rtmixer module, which can be used for the second option.
Please note that this is in very early development state, so use it with caution.
With this module, you don't have to write a callback function, you can use the function rtmixer.Mixer.play_buffer() to play an audio buffer at a specified time (in seconds). For reference, you can get the current time from rtmixer.Mixer.time.
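A very rough sketch of that second option follows; since rtmixer is in an early development state, the exact signatures of Mixer, play_buffer() and time may differ from what is shown here:
import time
import numpy as np
import rtmixer

samplerate = 44100
click = 0.5 * np.ones(441, dtype='float32')          # 10 ms mono burst

with rtmixer.Mixer(channels=1, samplerate=samplerate) as mixer:
    t0 = mixer.time                                  # current audio-clock time
    for i in range(8):
        # Schedule each sound exactly 0.5 s apart, in audio-clock time.
        mixer.play_buffer(click, channels=1, start=t0 + 0.25 + i * 0.5)
    time.sleep(8 * 0.5 + 1)                          # keep the stream running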
On a Mac laptop (OS 10.9.5), when playing sounds from a python program I will get an initial start-up latency of 0.5 s before the sound plays. If I have been playing sound within the last minute or so, there is no such latency. I have seen passing references to this kind of thing online but without much insight (e.g., http://music.columbia.edu/pipermail/portaudio/2014-November/016364.html). I looked for an Apple API way to disable it (like there is for the screen saver) but did not see anything obvious. The issue might be specific to laptops, as a power-saving feature for example. It happens not only on battery power, but also when plugged in.
Question: From python on OSX, how to tell the Mac to do whatever it needs to do to avoid that 0.5 sec latency the first time it plays a sound?
Constraints: Calling a command like pmset via subprocess is acceptable, unless it needs root (sudo) privileges; i.e., only normal user-space commands are acceptable. Also not acceptable: it's easy to write a little thread to play a short, near-silent sound every 30 seconds or so, but that adds complexity to the program and uses resources; there has to be a better way to do it.
Is the delay possibly due to attempting to play a HUGE sound file? Is the media file loaded into memory prior to this request to render it?
Try loading the audio into a buffer, then, upon a play signal, render directly from the buffer.
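For example, something along these lines (using the soundfile and sounddevice packages; the file name is a placeholder):
import soundfile as sf
import sounddevice as sd

# Decode the file into memory once, up front.
data, samplerate = sf.read("beep.wav", dtype='float32')

def play():
    # Render straight from the in-memory buffer on demand.
    sd.play(data, samplerate)

play()
sd.wait()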
I'm trying to write something that catches the audio being played to the speakers/headphones/sound card and checks whether it is playing and what the longest silence is. This is to test an application that is being executed and to see if it stops playing audio after a certain point; as such, I don't actually need to know what the audio itself is, just whether or not audio is playing.
I need this to be fully programmatic (so not requiring GUI tools or the like to set up an environment). I know applications like projectM do this; I just can't for the life of me find anything anywhere that describes how.
An audio level meter would also work for this, as would oscilloscope data or the like; I would really take any recommendation.
Here is a very similar question: record output sound in python
You could try to route your output to a new device with JACK and record it with PortAudio. There are Python bindings for PortAudio called pyaudio and for JACK called PyJack. I have never used the latter, but pyaudio works great.
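As a starting point, here is a rough pyaudio sketch that reports an RMS level per block, which is enough to spot silent stretches. Which input device corresponds to your JACK/loopback routing depends on your setup, so the default input device and the silence threshold are just placeholders:
import numpy as np
import pyaudio

RATE, CHUNK = 44100, 1024
SILENCE_RMS = 100                                # assumed threshold for 16-bit samples

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                input=True, frames_per_buffer=CHUNK)

silent_chunks = 0
for _ in range(int(RATE / CHUNK * 10)):          # monitor for roughly 10 seconds
    samples = np.frombuffer(stream.read(CHUNK), dtype=np.int16)
    rms = np.sqrt(np.mean(samples.astype(np.float64) ** 2))
    silent_chunks = silent_chunks + 1 if rms < SILENCE_RMS else 0
    print(f"RMS {rms:8.1f}  current silence ~{silent_chunks * CHUNK / RATE:.2f} s")

stream.stop_stream()
stream.close()
p.terminate()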
I would like to change i2c bus frequency in order to allow for slightly longer cables.
I am using python-smbus package and it does work very well, however, I am unable to find how to set the bus frequency.
I have looked through the docs but was unable to find anything even remotely related to setting bus parameters.
Is that something that can be done in Python, or do I need something lower level?
I am using a Raspberry Pi, which is an ARM architecture.
On the Raspberry Pi with the latest Jessie image, you can use this to check the current I2C frequency:
sudo cat /sys/module/i2c_bcm2708/parameters/baudrate
To change the frequency, you can add/change this parameter:
dtparam=i2c_baudrate=50000
(replace 50000 with the desired frequency) in:
/boot/config.txt
and reboot for the change to take effect.
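If you'd rather check the value from Python than from the shell, reading the same sysfs file works (on newer kernels the module may be called i2c_bcm2835, so the exact path is an assumption):
# Read the current I2C baudrate from sysfs.
with open('/sys/module/i2c_bcm2708/parameters/baudrate') as f:
    print(int(f.read().strip()), 'Hz')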
You'll have to do something at a lower level. Typically this stuff is set up by the board file in the kernel. I didn't see anything specifically being done with the I2C other than allocating the resources, so it's likely just using the default clock divisor. If you look at page 28 of the datasheet, you'll see that the default is 0x5dc. You'll need to set that register to a different value (probably bigger) to cope with the longer cables.
I have now spent a significant amount of time researching all the options. It turns out that there are indeed low-level registers, as specified in the other post; however, the Raspberry Pi's driver resets their value on every use, making any modification to them pretty much useless. The solution is to either write a custom I2C driver or simply wait for an updated version.
Some lower-level information can be found in the byval forum.