I have both FruityLoops and Propellerhead's Reason software synths on my Windows PC.
Is there any way I can get at and script these from either Visual Basic or Python? Or at least send MIDI messages to the synths from code?
Update: attempts to use something like a "MIDI mapper" (thanks for the link, MusiGenesis) don't seem to work. I don't think Reason or FL Studio act like standard GM MIDI synths.
Update 2: If you're interested in this question, check out this too.
Both applications support MIDI; it's just that they don't see each other.
To send messages between applications via MIDI, you need to install a virtual MIDI port.
There are several freely available, but this one works: http://www.midiox.com/zip/MidiYokeSetup.msi
You'll get a virtual MIDI output port that you can write to as if it were a normal MIDI device. In FruityLoops or ReBirth you choose that port as the input. That's all you need to do to connect the programs.
It'll work like this:
Your Application --> Virtual MIDI Port --> FruityLoops
Note: This answer doesn't exactly answer the question you asked but it might achieve the result you want :)
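For a concrete sketch of the "Your Application" end in Python, here is a note sent through such a port with the third-party mido library. The port name below is an assumption - check mido.get_output_names() for the exact MIDI Yoke name on your machine:

import time
import mido  # third-party: pip install mido python-rtmidi

print(mido.get_output_names())  # find the MIDI Yoke output port name here
port = mido.open_output("Out To MIDI Yoke:  1")  # assumed port name
port.send(mido.Message("note_on", note=60, velocity=100))  # middle C on
time.sleep(1)
port.send(mido.Message("note_off", note=60))  # release the note
port.close()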
You can author a VST plugin in Java using jVSTWrapper (http://jvstwrapper.sourceforge.net/). If you really wanted to use Python, you could use Jython to interface with Java and do it that way. Alternatively, you could write the plugin in Java or in another JVM scripting language like Groovy.
I think both FL Studio and Reason can be configured as the default MIDI playback device. To send MIDI messages to either from VB.NET, you'll need to PInvoke the midiOutOpen, midiOutShortMsg and midiOutClose API calls. Here's a link to code samples:
http://www.answers.com/topic/midioutopen
They're for VB6, but they should be easy to translate to VB.NET.
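For anyone doing this from Python instead, the same three winmm calls can be reached via ctypes. A minimal sketch - device ID 0 is an assumption; enumerate with midiOutGetNumDevs/midiOutGetDevCaps to find the right (virtual) port:

import ctypes
import time

winmm = ctypes.windll.winmm  # Windows multimedia API
handle = ctypes.c_void_p()
# open MIDI output device 0 (assumed; pick the right device ID on your system)
if winmm.midiOutOpen(ctypes.byref(handle), 0, 0, 0, 0) != 0:
    raise RuntimeError("midiOutOpen failed")
# a MIDI short message packs status | data1 << 8 | data2 << 16 into one DWORD:
# 0x90 = note-on on channel 1, 60 = middle C, 127 = full velocity
winmm.midiOutShortMsg(handle, 0x90 | (60 << 8) | (127 << 16))
time.sleep(1)
winmm.midiOutShortMsg(handle, 0x80 | (60 << 8))  # note-off
winmm.midiOutClose(handle)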
I know FL Studio can be "driven" from a plugin authored for FL (or a VSTx plugin), but I think these are always written in C or C++.
Edit: I just learned that Windows Vista dropped the MIDI Mapper (which would have made setting up FL or Reason as the default MIDI device simple). Amazing. Here is a link I found with an alternative solution:
http://akkordwechsel.de/15-windows-vista-und-der-midi-mapper/
I just tried it out (it's just a *.CPL file that you double-click to run) and it appears to work (although the GM Synth is the only option available on my laptop, so I'm not sure if it will pick up FL or Reason as choices).
What you need is a VST MIDI scripting plugin that can build a logic of MIDI events and send it to any MIDI channel. In FL you would then assign a MIDI channel to the VST instrument/effect whose values you want to tweak. Google for it - there are some plugins around - and please share them back here if you find anything useful :)
You could write a ReWire host, though you will have to get a license (the license is free, but your application must be proprietary, so no open source).
Alternatively, you could interface through MIDI messages.
Finally, you could implement a dummy audio device which would route the audio to/from wherever you want or process it in some way.
I imagine all of these would be reasonably difficult. MIDI is probably the easiest of the three (I have no idea how easy or hard the ReWire protocol is to use).
When it comes to Reason, you can't do much because of its closed architecture - you can't use VST plugins (or any other type, like DirectX ones) - so your only option is to use MIDI.
Regarding FruityLoops, you could write a VST plugin that takes input from a scripting language (VB, Python or whatever), but to write such a thing you would have to use Delphi or C++.
Alternatively, you can check out Max/MSP made by Cycling '74 - it's something like an IDE for music ;-) - and I'm pretty sure you can use Python with it.
There's an open-source music workstation called Frinika, which you can script in JavaScript (insert/delete notes, change MIDI effects like the pitch wheel, etc.). It can import and export regular MIDI files, so it will work with FruityLoops or whatever else you have.
// Insert New
song.newLane("MyMidiLane", type("Midi"));
lane = song.getLane("MyMidiLane");
part = lane.newPart( time("10.0:000"), time("4.0:000") );
part.insertNote(note("c#3"), time("11.2:000"), time("2:0"), 120 );
part.insertNote(note("f3"), time("11.3:000"), time("1:0"), 100 );
part.insertNote(note("g#3"), time("11.3:000"), time("1:0"), 100 );
part.insertNote(note("b3"), time("11.3:000"), time("0:64"), 100 );
part.removeNote(note("f3"), time("11.3:000"));
part = song.newLane("MyTextLane",
type("Text")).newPart(time("24.0:000"), time("10.0:000"));
part.text = "This is the test text to be inserted.";
part.lane.parts[0].remove(); // remove initially inserted text-part
Another example for reading/changing notes:
lane = song.getLane("MyMidiLane");
// a lane has a fixed instrument assigned
lane.parts[0].notes[0].duration=64
lane.parts[0].notes[1].duration=32
lane.parts[0].notes[1].startTick=120
// Parts are blocks of notes that you can drag around together in the Frinika GUI.
// They're like patterns in trackers.
for (i in lane.parts[0].notes) {
    println("i: " + i + ", n: " + noteName(lane.parts[0].notes[i].note));
    println("i: " + i + ", dur: " + lane.parts[0].notes[i].duration);
    println("i: " + i + ", startT: " + lane.parts[0].notes[i].startTick);
}
http://frinika.appspot.com/
It has a Java Web Start launcher as well, so you don't even have to install it.
It used to bundle the Javadoc documentation as well, but for some reason their latest downloads don't include it. It's a pity, because that's where the JavaScript bindings are documented, so now you have to browse the source or build the Javadoc yourself. (It has some built-in examples that are accessible from the scripting window; you should check them out first. My first example is from there.)
Here is the source file where you'll find the JavaScript docs:
frinika Javascript doc/source
But there are other options as well. You can check out mingus too, which is a Python library for music theory and MIDI file handling. It requires Fluidsynth, and the demo apps require Pygame too, so it's a bit more complicated to set up than Frinika.
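To give a flavour, the music-theory side of mingus works standalone (Fluidsynth is only needed for the mingus.midi playback layer); a tiny sketch:

from mingus.core import chords

print(chords.from_shorthand("Cmaj7"))     # ['C', 'E', 'G', 'B']
print(chords.determine(["C", "E", "G"]))  # identifies the C major triad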
P.S.:
Frinika has one particular bug: when dragging around neighbouring notes, some may not sound the right length. You can work around it by transposing the affected notes back and forth (fairly fast in piano-roll view), or by dragging the part that contains the notes back and forth. Restarting Frinika also helps, but that's the slower way. The bug affects neither saved files nor MIDI export.
Related
I've got a Brother ADS-1000W receipt scanner, and using the ControlCenter4 software it works great. However, I would like to be able to automate the scanning process, and I can't find any pointers/clues on where to get access to the ADS-1000W-specific features. With the ControlCenter4 software, I can have the scanner deskew images. It also scans to an arbitrary length and width (matching the scanned receipt). I'm assuming this is being handled by the scanner, but it may be happening in the ControlCenter4 software. These features specifically don't seem to be accessible through the TWAIN interface. I tried using TWAINCommander 3 and it doesn't show the deskew and arbitrary-size features in the TWAIN interface.
I've got both Linux and Windows machines available and I'm cool with a commandline solution or an SDK that I have to write software to implement. If it's an SDK, I prefer Python.
I know this is somewhat open-ended, but hoping someone can point in a direction for further research.
Just in case anyone else stumbles across this question: the ADS-1000W supports scanning to FTP. If PDF is selected, the scanner supports the de-skew option, so I installed vsftpd on my Linux box and used Python to process the files as they're uploaded.
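For anyone wanting to do the same, a minimal sketch of the processing side - the upload directory, polling interval and process() body are placeholders, and in practice you should wait until a file stops growing before touching it, since the scanner may still be uploading:

import os
import time

WATCH_DIR = "/srv/ftp/scans"  # assumed vsftpd upload directory

def process(path):
    print("new scan:", path)  # placeholder: OCR, rename, archive, ...

seen = set(os.listdir(WATCH_DIR))
while True:
    current = set(os.listdir(WATCH_DIR))
    for name in sorted(current - seen):
        if name.lower().endswith(".pdf"):
            process(os.path.join(WATCH_DIR, name))
    seen = current
    time.sleep(5)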
I have a robotics type project with an Arduino Uno, and to make a long story short, I am experimenting with some AI algorithms. However, I need to implement some high level matrix algorithms that would be quite simple using NumPy/SciPy, but they are an utter nightmare in C or C++. Even with the libraries out there, this is just getting ridiculous.
Is there any way I can do this project in Python? I think I heard something about the Mega having this capability, but I have an Uno, and replacing it is not an option at this point (that would set the project back quite a bit). Also, I heard some things about using Python to communicate with the Arduino via USB, but I cannot have the USB cable attached while the thing is running. I need to be able to upload the program and be done with it.
Are there any options out there, or have I just reached a dead end?
There was a talk about using Python with robotics at this year's PyCon AU, called "Ah! I see you have the machine that goes 'BING'!", by Dr. Graeme Cross.
The only option he recommended for using Python on a microcontroller board was PyMite which I think also goes by the name of Python-On-A-Chip.
It has been ported to a range of boards - specifically he mentions the Arduino Mega which you said is not an option for you, but it is possible it is supported on other Arduino boards.
However, because it is a "batteries not included" version of Python it is more than likely that you will have a real problem getting numpy/scipy etc up and running.
As other posters have suggested, implementing in C might be the path of least resistance.
Update: again, not specifically for Arduino, but pyMCU looks to provide python on a chip. The author states he may look at developing an Arduino version of pyMCU if there is enough interest.
I've started work on a "Little Python" to C++ (called Pyxie - a play on Py CC- Pyc-C) compiler, with the specific aim of compiling a sane subset of python to C++ such that it can run on an arduino.
This is far from complete at time of writing (0.0.16), but it can currently compile a very small subset of python - enough for the arduino "blink" example to run. To support this, it has a compilation profile - which essentially means "compile using the arduino toolchain."
A program it can compile looks like this:
led = 13
pinMode(led, OUTPUT)
while True:
    digitalWrite(led, HIGH)
    delay(1000)
    digitalWrite(led, LOW)
    delay(1000)
This parses, performs analysis (like type inference, etc), compiles to C++, which is then compiled to a hex file, which you can load onto your device.
There's a long way to go before it's useful, but it is progressing and does have a roadmap/etc.
PyPI - http://pypi.python.org/pypi/pyxie
Homepage - http://www.sparkslabs.com/pyxie/index.html
In particular, a key difference from MicroPython (and PyMite) is that it's designed to compile for devices too small to run either implementation. (This also means it's very different from things like ShedSkin, which, while also a Python-to-C++ compiler, targets larger execution environments.)
It's going to be difficult to get any kind of Python script running directly on the Arduino Uno. The reason is that Python is an interpreted language, so you would need an interpreter on board in addition to the plain-text script, and there is probably not enough memory for all of that on the Uno.
The best you can do is find a way to compile a Python script to native machine code (which is how C/C++ works). I have seen projects that attempt something like that for other platforms, but (as far as I know) none that does it successfully for the Arduino Uno yet.
You can visit http://www.toptechboy.com/using-python-with-arduino-lessons/ for more.
Hope this helps. Thanks!
This is not a direct solution, but in your circumstances, if I were you, I would write the AI program on my computer and the rest on the Arduino. Then I would wrap the AI program in a Flask server, port-forward from my router to the local machine, and finally make requests from the Arduino to the server.
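A minimal sketch of what that Flask server could look like - the /predict route and do_ai() are placeholders for whatever the AI program actually computes, and the Arduino (via an Ethernet/WiFi shield) would POST its readings to it:

from flask import Flask, request, jsonify
import numpy as np

app = Flask(__name__)

def do_ai(values):
    return float(np.mean(values))  # placeholder for the real NumPy/SciPy work

@app.route("/predict", methods=["POST"])
def predict():
    values = request.get_json()["values"]  # e.g. {"values": [1, 2, 3]}
    return jsonify(result=do_ai(values))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)  # reachable by the Arduino on the LAN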
I would like to know if there are any APIs for Python to programmatically control a phone, like starting and ending calls, but also to record conversations.
I would also like to use the Headphones and Mic of the computer to talk over the phone.
Any info would be great, I tried googling for something, but nothing useful came up.
Be careful when using PyBluez! The results will actually depend on the BT-USB dongle you are using. Depending on the hardware (the BT chip inside), PyBluez will use one BT stack or another - for example, there was one from WIDCOMM. Results will vary, as PyBluez actually wraps those stacks - all of which are far from complete.
So, when you have a working project, be sure to note which BT stack you were using :)
For Python audio stuff, you could try this.
PyBluez is an effort to create python wrappers around system Bluetooth resources to allow Python developers to easily and quickly create Bluetooth applications.
Unfortunately I've not found a page dedicated to its features, but it could be a good starting point, whether everything you need is in its feature set, or if you could build your application upon it by extending it.
http://code.google.com/p/pybluez/
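As a starting point, device discovery with PyBluez is only a few lines (assuming the package is installed and your dongle's stack is supported):

import bluetooth  # PyBluez

# scan for nearby devices; lookup_names resolves their friendly names
nearby = bluetooth.discover_devices(duration=8, lookup_names=True)
for addr, name in nearby:
    print(addr, name)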
I have a Pantone Huey, a monitor-calibration probe (a device you attach to the monitor that gives you colour readings) - I want to get readings from the device in Python.
Having never written such a device driver before, I'm not sure where to start.
I've found two open-source C/C++ projects that interface with the Huey - ArgyllCMS and mcalib.
ArgyllCMS comes with a spotread command which returns readings from the device, although it only functions as an interactive command-line tool, so running it via subprocess will not (easily) work.
The code ArgyllCMS uses to communicate with the device is in spectro/huey.c.
I've not tried it (I only just found it while writing this question), but mcalib contains much less code, mainly just huey.cpp - however, it has a worrying number of FIXME comments and incomplete methods, and the code appears to have been automatically generated (unhelpful variable names).
There seem to be three options:
Modify spotread to work without any interactive prompts, call it via subprocess
Create a C-based Python module around huey.c or huey.cpp
Re-implement the interface using something like PyUSB
Being much more familiar with Python, I'm tempted to use PyUSB, but will this be substantially more work than wrapping existing code with the Python C API? Is there anything obvious in either of the C implementations that will not be easily doable in PyUSB?
Given the existence of spotread the easiest (though perhaps not the best) way to proceed would be to use pexpect. It allows you to interact with other command-line programs.
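A minimal sketch of that approach - note the prompt text below is an assumption, so run spotread by hand once and adjust the expect patterns to match its real output:

import pexpect

child = pexpect.spawn("spotread")
child.expect("hit a key")     # assumed prompt wording; verify on your system
child.send(" ")               # take a reading
child.expect("hit a key")     # wait for the next prompt
print(child.before.decode())  # output between prompts holds the reading
child.send("q")               # assumed quit key
child.close()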
When an audio or midi clip is played (triggered), its name needs to be sent using OSC to another application.
LiveAPI is an interface which allows one to explore and automate Ableton Live using python scripts.
The code to do this must be written in a python script, which must be placed in a specific folder where Ableton Live can find it, selected in Live's Preferences.
More information about the LiveAPI can be found on these sites:
http://www.assembla.com/wiki/show/live-api
http://groups.google.com/group/liveapi
According to the LiveAPI documentation, the Clip object has a "name" attribute which holds the clip name. Presumably that's what you want to send in your OSC packets.
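As a hedged sketch of tying that to OSC inside a remote script: walk the song's tracks, find playing clips, and send each name as a hand-built OSC packet over UDP. The tracks/clip_slots/is_playing attribute names follow LiveAPI conventions but should be verified against your Live version:

import socket

def osc_message(address, value):
    # minimal OSC packet with a single string argument; OSC strings are
    # null-terminated and padded to a multiple of 4 bytes
    def pad(b):
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",s") + pad(value.encode())

def send_playing_clip_names(song, host="127.0.0.1", port=9000):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for track in song.tracks:
        for slot in track.clip_slots:
            if slot.clip is not None and slot.clip.is_playing:
                sock.sendto(osc_message("/clip/name", slot.clip.name),
                            (host, port))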
Also, it's worth mentioning that the Max/MSP support in Live8 will probably be a lot more comfortable to work with than LiveAPI, which is pretty much a dead project. Max/MSP supposedly has OSC support, which was added to support the JazzMutant Lemur, but I'm not sure how much of that made it into Live. Anyways, it's worth keeping in mind for when Live8 is released.
I know about Max 4 Live, but as I see it, it's kind of a different thing. Yes, it will probably be able to interface with Live to do all the stuff people now do with LiveAPI. Some even think that M4L may not go through LiveAPI at all and may use some internal interface instead (since Ableton and Cycling '74 are developing it together). From the promo videos on the ableton.com site, I think M4L will mostly be about making and modifying sound, and not so much about controlling/reading other instruments, effects, clips, etc.
I would not say that the LiveAPI project is dead, because a lot of hardware MIDI controllers rely on LiveAPI for their auto-mapping magic. When you look at the MIDI Remote Scripts folder in Live, you'll see that each controller has its own folder with a Python script. So I definitely think LiveAPI is going to stay, and that this door into Live will remain open. They even created a new folder called Framework which contains some newer code, probably required for the new Akai controller to work with Live (that's the theory, at least).
The application in which I plan to use the playing clip's name is called vvvv, so I don't want to have to bring Max into this, because it's not really needed.
I had some success with someone's modification of the original LiveAPI code, but it only worked when I requested all the clips' names, not when I asked for just a single one. I didn't have time to play with it later, and the event I was preparing this for has passed. I plan to work it out eventually, but it's not that urgent anymore.