I need to play several videos (each with its audio) from a Python script, because I need to insert specific pauses and triggers between them (for a neuropsychological experiment).
I tried many examples from the internet to implement the media player, like this one from here, but I didn't manage to compile the MPlayer library.
Finally, I just need to have the videos controlled by Python, whether through an external program or not. The critical part is finding a way to implement playvideo() in the following code:
for pairs in Videos_paired_to_play:
    video1, video2 = pairs
    send_trigger(triggerType1)
    playvideo(video1[1])
    make_a_pause(2)
    playvideo(video2[1])
    send_trigger(triggerType2)
    make_a_pause(20)
Ideally, I would need a way to play the videos without destroying the window after each one has played.
Thanks!
If you're on Windows, you could tie into the Windows Media Player COM object, which will give you a lot more options:
https://msdn.microsoft.com/en-us/library/windows/desktop/dd564035(v=vs.85).aspx
Alternatively, you could find another feature-complete player that exposes its interface to COM or Python, depending on OS. This would probably beat trying to hack your own, and the documentation would be there if you needed to extend or refactor it for other things.
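From Python, a minimal playvideo() on top of that COM object might look like the sketch below (assuming pywin32 and the standard WMPlayer.OCX object model; the polling loop is crude, and without embedding the control in a window you may get audio only, so treat this as a starting point rather than a finished implementation):

import time
import win32com.client

def playvideo(path):
    # Drive the Windows Media Player ActiveX object via COM (pywin32).
    wmp = win32com.client.Dispatch('WMPlayer.OCX')
    wmp.settings.autoStart = True
    wmp.URL = path
    time.sleep(0.5)                      # give it a moment to start transitioning
    # playState follows the WMPPlayState enumeration: 1 = stopped, 3 = playing,
    # 8 = media ended, 10 = ready; poll until playback has finished.
    while wmp.playState not in (1, 8, 10):
        time.sleep(0.1)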
Related
I'm trying to write something that catches the audio being played to the speakers/headphones/soundcard and determines whether it is playing and what the longest silence is. This is to test an application that is being executed, to see whether it stops playing audio after a certain point. As such, I don't actually need to know what the audio itself is, just whether or not audio is playing.
I need this to be fully programmatic (so not requiring GUI tools or the like to set up an environment). I know applications like projectM do this, I just can't for the life of me find anything anywhere that explains how.
An audio level meter would also work for this, as would oscilloscope data or the like; I'd really take any recommendation.
Here is a very similar question: record output sound in python
You could try to route your output to a new device with JACK and record it with PortAudio. There are Python bindings for PortAudio called pyaudio and for JACK called PyJack. I have never used the latter, but pyaudio works great.
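A rough sketch of the measuring side with pyaudio, assuming you have already routed the output you care about to a device you can open as an input (a JACK or loopback monitor device, for instance); the threshold, sample rate and device selection are things you would have to tune:

import audioop
import time
import pyaudio

CHUNK = 1024
RATE = 44100
SILENCE_RMS = 500                       # below this RMS level we call it silence

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                input=True, frames_per_buffer=CHUNK)

longest_silence = 0.0
silence_started = None
try:
    while True:
        data = stream.read(CHUNK)
        if audioop.rms(data, 2) < SILENCE_RMS:          # 2 = bytes per sample
            if silence_started is None:
                silence_started = time.time()
            longest_silence = max(longest_silence, time.time() - silence_started)
        else:
            silence_started = None
except KeyboardInterrupt:
    stream.stop_stream()
    stream.close()
    p.terminate()
    print("Longest silence: %.1f s" % longest_silence)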
An application of mine retrieves the currently playing song from a multitude of music players. However, I'm having great trouble supporting Zune and Windows Media Player.
I've done a lot of googling on the subject; unfortunately, it's only confusing me more and more.
What I would normally do for my other applications (sketched in code below):
Iterate over all open windows every 4 seconds
Get the title of all windows
Check the title for a pattern (e.g., " - Spotify ")
If it's there, adjust the title for output.
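In code, that polling loop looks roughly like this (a sketch using pywin32's win32gui; the exact title pattern differs per player):

import time
import win32gui

def visible_window_titles():
    titles = []
    def collect(hwnd, _):
        if win32gui.IsWindowVisible(hwnd):
            text = win32gui.GetWindowText(hwnd)
            if text:
                titles.append(text)
    win32gui.EnumWindows(collect, None)
    return titles

while True:
    for title in visible_window_titles():
        if " - Spotify" in title:                    # pattern check
            print(title.replace(" - Spotify", ""))   # adjust the title for output
    time.sleep(4)                                    # every 4 seconds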
WMP does not have the currently playing song in the title.
Zune does, but it rotates every few seconds between title, album and artist, which is heavily unreliable to track with my current method, albeit possible.
Windows Media Player
I've also tried using the COM component for Windows Media Player:
import win32com.client
wmp = win32com.client.gencache.EnsureDispatch('WMPlayer.OCX')
# some function I don't have here, it retrieves the current playing song
# and other data
The big problem with that is that it requires you to start WMP programmatically, which would be extremely user-unfriendly.
So, what have I found? This SO post redirects to WMP.dll. But as far as I've read, it has the same problem as the COM approach: you have to start it programmatically. If not, I would really like some directions on how to work with that DLL in Python.
There would be another, slightly less hacky solution, which is to write a plugin for WMP, have my users download that plugin, and retrieve the data from it. I'd rather not go there, since I have no experience with any of the C languages, nor do I feel like digging into plugin documentation for this.
Zune
A method would be to cycle through the three title states, determine which state it's currently in, and find the position of the other two.
For example:
First 5 seconds the title is: Super_song
Next 5 seconds the title is: By Power_artist
Next 5 seconds the title is: Good_album (date)
So I could determine which state is the album title by making a regex for the date (which is always there) and then find the title and artist by waiting a few seconds.
This is obviously not a great solution, since it takes a while and it's not very reliable either (what if the song name contains a date, for example?).
The next problem is that it's not consistent either: sometimes the title just stays "Zune" for minutes. No idea why.
So, move on to the next method.
There's this application called ZuneNowPlaying. It "somehow" gets the currently playing song from Zune and puts it in the registry. It clearly doesn't work via my sloppy title method, since it changes the registry the instant the song changes. Immediately.
This is the solution I had used in the working version of my program, but many users reported that it simply didn't work: nothing happened. I checked, and the program indeed doesn't reliably update the registry all the time. I don't know why, and I don't know how to fix it. Therefore, this solution is also -scrapped-.
"Is the fact that it is using the name "MsnMsgrUIManager" causing the Zune software to send it information about which song is playing? Is there a way to get this information without this kind of hack?"
That quote is from the discussion of the Zune Now Playing application. The source is unfortunately not available; at least, I can't find it. Anyone got more on this?
The third method I had heard of was, once again, a DLL, called ZuneShell.dll. I don't remember where I read about it, nor can I find it via Google, since all the results are "Is ZuneShell.dll a virus?".
Once again, I run into the problem that I wouldn't know how to work with this even IF I had documentation on it; heck, I don't even know whether it's what I've been looking for.
Alternate directions to maybe look into
While browsing about this subject, I've seen people talking about retrieving data directly from GUIs. I'm not sure how legitimate or possible that is, or even how correct my memory of it is, but if it's possible, could someone point me to more on this?
Anything else, really.
I have working code in C++ to print the name of media currently playing in WMP. It's a simple console application (78 lines of code).
Steps:
1) Implement a basic COM object implementing IUnknown, IOleClientSite, IServiceProvider and IWMPRemoteMediaServices. This is straightforward (sort of, your mileage may vary) using the ATL template CComObjectRootEx. The only methods needing (simple) code are IServiceProvider::QueryService and IWMPRemoteMediaServices::GetServiceType. All other methods may return E_NOTIMPL.
2) Instantiate the "WMPlayer.OCX" COM object (in my case, via CoCreateInstance)
3) Retrieve from the object an IOleObject interface pointer via QueryInterface
4) Instantiate an object of the class from step 1) (I use the CComObject<>::CreateInstance template)
5) Call the SetClientSite method on the interface you got at 3), passing a pointer to your IOleClientSite implementation.
6) During the SetClientSite call, WMP will call you back: first asking for an IServiceProvider interface pointer, then calling the QueryService method asking for an IWMPRemoteMediaServices interface pointer. Return your implementation of IWMPRemoteMediaServices and, third, you will be called again via GetServiceType. You must then return "Remote". You are now connected to the running WMP instance.
7) Query the COM object for an IWMPMedia interface pointer
8) If 7) didn't give NULL, read the IWMPMedia::name property.
9) DONE
All the above was tested with VS2010 / Windows 7, with WMP running (if there is no Media Player process running, just do nothing).
I don't know if you can/want to implement COM interfaces and objects in Python. If you are interested in my C++ code, let me know. You could put that code in a C++ DLL and then call it from Python.
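For reference, if you end up driving your own WMPlayer.OCX instance from Python instead of attaching to the user's running player (which is what the remoting steps above achieve), the equivalent of steps 7) and 8) is just the currentMedia property; a minimal, untested sketch:

import win32com.client

wmp = win32com.client.gencache.EnsureDispatch('WMPlayer.OCX')
media = wmp.currentMedia          # IWMPMedia, or None if nothing is loaded in this instance
if media is not None:
    print(media.name)             # same property as IWMPMedia::name in step 8)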
I just found a cool Python tool which can query all the controls of any program.
Simple, straightforward, and easy to read. It's here:
http://www.brunningonline.net/simon/blog/archives/winGuiAuto.py.html
With that you can get the info from the GUI.
You can also grab the loaded file list. It works for most media players.
You can get this information programmatically like this:
http://www.codeproject.com/Articles/18975/Listing-Used-Files
This is C++, but at that point you can wrap the native code.
That way you have to extract the ID3 tags yourself. Might be worth a shot, as it would be a universal solution.
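Once you have the file path from the open-file list, reading the tags from Python is the easy part; for instance with the mutagen library (one option among several, and the path below is just a placeholder):

from mutagen.easyid3 import EasyID3

tags = EasyID3(r"C:\Music\some_track.mp3")     # path obtained from the loaded-file list
print(tags.get("artist"), tags.get("title"))   # each returns a list of values, or None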
I would like to know if there are any APIs for Python to programmatically control a phone, such as starting and ending calls, but also recording conversations.
I would also like to use the Headphones and Mic of the computer to talk over the phone.
Any info would be great; I tried googling for something, but nothing useful came up.
Be careful when using PyBluez! The results will actually depend on the BT-USB dongle you are using. Depending on the hardware (the BT chip in there), PyBluez will use one or another BT stack; for example, there was one from WIDCOMM. Results will vary, as PyBluez is actually a wrapper around those stacks, all of which are far from complete.
So, when you have a working project, be sure to know what actual BT stack you were using :)
For Python audio stuff, you could try this.
PyBluez is an effort to create python wrappers around system Bluetooth resources to allow Python developers to easily and quickly create Bluetooth applications.
Unfortunately I haven't found a page dedicated to its features, but it could be a good starting point, whether everything you need is already in its feature set or you could build your application on top of it by extending it.
http://code.google.com/p/pybluez/
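To give an idea of what the PyBluez side looks like, here is a minimal discovery/connection sketch; whether dialing via an AT command over RFCOMM actually works depends entirely on the phone exposing a suitable serial or hands-free service, and the address, channel and number below are placeholders:

import bluetooth

# Find nearby devices so you can pick out the phone by name.
for addr, name in bluetooth.discover_devices(lookup_names=True):
    print(addr, name)

phone_addr = "00:11:22:33:44:55"                     # placeholder address
sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
sock.connect((phone_addr, 1))                        # channel 1 is an assumption
sock.send(b"ATD+15551234567;\r")                     # AT dial command, if the phone supports it
sock.close()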
I have both Fruityloops and Propellerheads Reason software synths on my Windows PC.
Any way I can get at and script these from either Visual Basic or Python? Or at least send Midi messages to the synths from code?
Update: attempts to use something like a "MIDI mapper" (thanks for the link, MusiGenesis) don't seem to work. I don't think Reason or FL Studio act like standard GM MIDI synths.
Update 2 : If you're interested in this question, check out this too.
Both applications support MIDI. It's just that they don't see each other.
In order to send messages via MIDI between applications, you need to install a virtual midi port.
There are several freely available, but this one works: http://www.midiox.com/zip/MidiYokeSetup.msi
You'll get a virtual MIDI output port that you can write to as if it's a normal MIDI device. In Fruity Loops or Rebirth you choose that port as the input. That's all you need to do to connect the programs.
It'll work like this:
Your Application --> Virtual MIDI Port --> FruityLoops
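From Python, writing to that virtual port can be done with any MIDI library; for example with mido (plus the python-rtmidi backend), roughly like this, where the port name is whatever MIDI Yoke registers itself as on your system:

import mido

print(mido.get_output_names())                    # look up the exact MIDI Yoke port name
port = mido.open_output('Out To MIDI Yoke:  1')   # placeholder name
port.send(mido.Message('note_on', note=60, velocity=100))
port.send(mido.Message('note_off', note=60))
port.close()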
Note: This answer doesn't exactly answer the question you asked but it might achieve the result you want :)
You can author a VST plugin in Java using jVSTWrapper (http://jvstwrapper.sourceforge.net/). If you really wanted to use Python, you could use Jython to interface to Java and do it that way. Alternatively, you could just write the plugin in Java or another scripting language for the JVM, like Groovy.
I think both FL Studio and Reason can be configured as the default MIDI playback device. To send MIDI messages to either from VB.NET, you'll need to PInvoke the midiOutOpen, midiOutShortMsg and midiOutClose API calls. Here's a link to code samples:
http://www.answers.com/topic/midioutopen
They're for VB6, but they should be easy to translate to VB.NET.
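Since the question also mentions Python: the same three winmm calls can be made directly with ctypes, roughly like this (device 0 is whatever the default/mapped MIDI output is on your machine):

import ctypes

winmm = ctypes.windll.winmm
hmidi = ctypes.c_void_p()
winmm.midiOutOpen(ctypes.byref(hmidi), 0, 0, 0, 0)            # open MIDI output device 0

# midiOutShortMsg packs the message as status | data1 << 8 | data2 << 16.
winmm.midiOutShortMsg(hmidi, 0x90 | (60 << 8) | (100 << 16))  # note on, middle C, velocity 100
winmm.midiOutShortMsg(hmidi, 0x80 | (60 << 8))                # note off
winmm.midiOutClose(hmidi)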
I know FL Studio can be "driven" from a plugin authored for FL (or a VSTx plugin), but I think these are always written in C or C++.
Edit: I just learned that Windows Vista dropped the MIDI Mapper (which would have made setting up FL or Reason as the default MIDI device simple). Amazing. Here is a link I found with an alternative solution:
http://akkordwechsel.de/15-windows-vista-und-der-midi-mapper/
I just tried it out (it's just a *.CPL file that you double-click to run) and it appears to work (although the GM Synth is the only option available on my laptop, so I'm not sure if it will pick up FL or Reason as choices).
What you need is a VST MIDI scripting plugin to create the logic of MIDI events to be sent to any MIDI channel. You would need to set a MIDI channel in FL for the VST instrument/effect whose values you want to tweak. Google for it; there are some plugins around, and please share them back here if you find anything useful :)
You could write a Rewire host. Though, you will have to get a license (the license is free, but your application must be proprietary, so no open source).
Alternatively, you could interface through MIDI messages.
Finally, you could implement a dummy audio device which would route the audio to/from wherever you want or process it in some way.
I imagine all of these would be reasonably difficult. MIDI is probably the easiest of the three (I have no idea how easy or hard the Rewire protocol is to use).
When it comes to Reason, you can't do much with it because of its closed architecture: you can't use VST plugins (or any other type, like DirectX ones), so your only option is to use MIDI.
Regarding Fruity Loops, you could write a VST plugin that takes input from a scripting language (VB, Python or whatever), but in order to write such a thing you would have to use Delphi or C++.
Alternatively, you can check out Max, made by Cycling '74; it's something like an IDE for music ;-) and I'm pretty sure you can use Python with it.
There's an open-source music workstation called Frinika, and you can script it in JavaScript (insert/delete notes, change MIDI effects like the pitch wheel, etc.). It can import/export regular MIDI files, so it will work with Fruity Loops or whatever else you have.
// Insert New
song.newLane("MyMidiLane", type("Midi"));
lane = song.getLane("MyMidiLane");
part = lane.newPart( time("10.0:000"), time("4.0:000") );
part.insertNote(note("c#3"), time("11.2:000"), time("2:0"), 120 );
part.insertNote(note("f3"), time("11.3:000"), time("1:0"), 100 );
part.insertNote(note("g#3"), time("11.3:000"), time("1:0"), 100 );
part.insertNote(note("b3"), time("11.3:000"), time("0:64"), 100 );
part.removeNote(note("f3"), time("11.3:000"));
part = song.newLane("MyTextLane",
type("Text")).newPart(time("24.0:000"), time("10.0:000"));
part.text = "This is the test text to be inserted.";
part.lane.parts[0].remove(); // remove initially inserted text-part
Another example for reading/changing notes:
lane = song.getLane("MyMidiLane");
// a lane has a fixed instrument assigned
lane.parts[0].notes[0].duration=64
lane.parts[0].notes[1].duration=32
lane.parts[0].notes[1].startTick=120
// Parts are blocks of notes that you can drag around together in the Frinika GUI.
// They're like patterns in trackers.
for (i in lane.parts[0].notes){
println("i: "+i+", n: "+noteName(lane.parts[0].notes[i].note));
println("i: "+i+", dur: "+lane.parts[0].notes[i].duration);
println("i: "+i+", startT: "+lane.parts[0].notes[i].startTick);
}
http://frinika.appspot.com/
It has a Java Web Start launcher as well, so you don't even have to install it.
It used to bundle the Javadoc documentation as well, but for some reason their latest downloads don't include it. It's a pity, because that's where the JavaScript bindings are documented. So, for now, you have to browse the source or build the Javadoc yourself. (It has some built-in examples that are accessible from the scripting window; you should check them out first. My first example is from there.)
Here is the source file where you'll find the JavaScript docs:
frinika Javascript doc/source
But there are other options as well. You could also check out mingus, a Python library for music theory and MIDI file handling. It requires FluidSynth, and the demo apps require GamePython too, so it's a bit more complicated to set up than Frinika.
P.S.:
Frinika has a particular bug: when dragging around neighbouring notes, some might not sound the right length. You can work around it by transposing the consecutive notes back and forth (fairly fast in piano-roll view), or by dragging the part that contains the notes back and forth. Restarting Frinika also helps, but that's the slower way. Either way, the bug won't affect saved files or MIDI export.
When an audio or MIDI clip is played (triggered), its name needs to be sent via OSC to another application.
LiveAPI is an interface which allows one to explore and automate Ableton Live using Python scripts.
The code to do this must be written in a Python script, which must be placed in a specific folder where Ableton Live can find it and selected in Live's Preferences.
More information about the LiveAPI can be found on these sites:
http://www.assembla.com/wiki/show/live-api
http://groups.google.com/group/liveapi
According to the LiveAPI documentation, the Clip object has a "name" attribute which holds the clip name. Presumably that's what you want to send in your OSC packets.
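For the OSC side, once you have the clip's name, sending it is only a few lines; for example with the python-osc package (shown purely as an illustration: the host, port and address pattern are whatever vvvv is set up to listen for, clip stands for a LiveAPI Clip object, and the Python embedded in Live may need a different or bundled OSC implementation):

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)         # host/port that vvvv listens on (assumed)
client.send_message("/live/clip/name", clip.name)   # clip: a LiveAPI Clip object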
Also, it's worth mentioning that the Max/MSP support in Live 8 will probably be a lot more comfortable to work with than LiveAPI, which is pretty much a dead project. Max/MSP supposedly has OSC support, which was added to support the JazzMutant Lemur, but I'm not sure how much of that made it into Live. Anyway, it's worth keeping in mind for when Live 8 is released.
I know about Max 4 Live, but as I see it, it's kind of a different thing. Yes, it will probably be able to interface with Live to do all the stuff people now do with LiveAPI. Some even think that M4L may not go through LiveAPI at all and use some internal interface instead (since Ableton and Cycling '74 are developing it together). From the promo videos on the ableton.com site, I think M4L will mostly be about making and modifying sound, and not so much about controlling/reading other instruments, effects, clips, etc.
I would not say the LiveAPI project is dead, because a lot of hardware MIDI controllers rely on LiveAPI for some auto-mapping magic. When you look at the MIDI Remote Scripts folder in Live, you'll see that each controller has its own folder with a Python script. So I definitely think LiveAPI is going to stay, and that this door into Live will remain open. They even created a new folder called Framework, which contains some newer code, probably required for the new Akai controller to work with Live (that is what people believe, in theory).
The application in which I plan to use the playing clip's name is called vvvv, so I don't want to have to bring Max into this, because it is not really needed.
I had some success with someone's modification of the original LiveAPI code, but it only worked when I requested all the clips' names, not when I asked for just a single one. I didn't have time to play with it later, and the event for which I was preparing this has passed. I plan to work it out eventually, but it's not that urgent anymore.