I am writing a script to convert a picture into MIDI notes based on the RGBA values of the individual pixels. However, I cannot seem to get the last step working, which is to actually output the notes to a file.
I have tried using the MIDIUtil library; however, its documentation is not the greatest, and I can't seem to figure it out.
If anyone could tell me how to sequence the notes (so that they don't all begin at the beginning) it would be greatly appreciated.
Looking at the sample code, something like this should work:
from midiutil.MidiFile import MIDIFile
# create your MIDI object
mf = MIDIFile(1) # only 1 track
track = 0 # the only track
time = 0 # start at the beginning
mf.addTrackName(track, time, "Sample Track")
mf.addTempo(track, time, 120)
# add some notes
channel = 0
volume = 100
pitch = 60 # C4 (middle C)
time = 0 # start on beat 0
duration = 1 # 1 beat long
mf.addNote(track, channel, pitch, time, duration, volume)
pitch = 64 # E4
time = 2 # start on beat 2
duration = 1 # 1 beat long
mf.addNote(track, channel, pitch, time, duration, volume)
pitch = 67 # G4
time = 4 # start on beat 4
duration = 1 # 1 beat long
mf.addNote(track, channel, pitch, time, duration, volume)
# write it to disk
with open("output.mid", 'wb') as outf:
    mf.writeFile(outf)
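To connect this back to the original pixel-to-notes goal, here is a minimal sketch built on the same MIDIUtil calls. The Pillow import, the picture filename, and the RGBA-to-pitch/time/volume mapping are illustrative assumptions, not anything MIDIUtil prescribes:
from PIL import Image  # assumption: Pillow for pixel access
from midiutil.MidiFile import MIDIFile

img = Image.open("picture.png").convert("RGBA")  # hypothetical input image
mf = MIDIFile(1)
mf.addTrackName(0, 0, "Pixel Track")
mf.addTempo(0, 0, 120)

for i, (r, g, b, a) in enumerate(img.getdata()):
    pitch = r % 128              # arbitrary mapping: red channel -> pitch
    time = i * 0.25              # sequence pixels a quarter beat apart
    volume = max(1, a // 2)      # arbitrary mapping: alpha -> volume (0-127)
    mf.addNote(0, 0, pitch, time, 0.25, volume)

with open("pixels.mid", 'wb') as outf:
    mf.writeFile(outf)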
I know this is an old post, but I'm the author of the library, and I wanted to mention that Python 2 and 3 support have now been unified. With the demise of Google Code, the code is now hosted on GitHub and can be installed via pip, i.e.:
pip install MIDIUtil
Documentation is available at Read The Docs.
(Tried to comment but I lacked the experience points.)
The end-of-track message is created automatically when the file is written to disk.
Related
Type 0 midi files (example) have all instruments crammed onto a single track.
Type 1 midi files have instruments separated out into distinct tracks.
Is there a good way to convert from type 0 to type 1? If there are any resources out there that can run this conversion, I'd love to hear about them!
Here's how I took care of this. The important insight was that channel in midi files designates the device port on which some info is sent (e.g. some midi keyboard input), while program is the instrument voice (e.g. sawtooth lead, cajun banjo...).
This means one can create a dictionary with one key per voice, and a list of values that contain notes played in that voice. Time should initially be stored in global coordinates (as in a type-0 file the relative time coordinates are expressed across all voices but we're now separating voices out into distinct lists of notes).
Then one can convert back to relative time units, store the bpm and time resolution values from the input type-0 track, and whoomp--there's your type 1 midi.
from collections import defaultdict
import os

import mido

def subdivide_midi_tracks(path):
    '''Convert a type 0 midi file to a type 1 midi file'''
    m = mido.MidiFile(path)  # load the original type-0 midi file
    messages = []  # a list of message dicts, one list per track
    for track in m.tracks:
        time = 0  # store time in aggregate (absolute) units
        track_messages = []
        for i in track:
            i = i.dict()
            if i.get('time', None):
                time += i.get('time')
            i['time'] = time
            track_messages.append(i)
        messages.append(track_messages)
    # build a dictionary of the events for each channel
    d = defaultdict(list)  # d[channel_id] = [notes]
    for track in messages:
        for i in track:
            channel = i.get('channel', -1)
            d[channel].append(i)
    # convert time units in each channel back to relative units
    for channel in d:
        total_time = 0
        for i in sorted(d[channel], key=lambda i: i['time']):
            t = i['time']
            i['time'] = t - total_time
            total_time = t
    # create a midi file and add a track for each channel
    m2 = mido.MidiFile()
    for channel in sorted(d.keys()):
        track = mido.MidiTrack()
        # add the notes to this track
        for note in d[channel]:
            note_type = note.pop('type')
            # if this is a meta message, append a MetaMessage, else a Message
            try:
                track.append(mido.MetaMessage(note_type, **note))
            except Exception:
                track.append(mido.Message(note_type, **note))
        m2.tracks.append(track)
    # ensure the time quantization is the same in the new midi file
    m2.ticks_per_beat = m.ticks_per_beat
    return m2

path = "data/nes/'Bomb_Man's_Stage'_Mega_Man_I_by_Mark_Richardson.mid"
m2 = subdivide_midi_tracks(path)
m2.save('whoop.mid')
os.system('open whoop.mid')
I have an audio file and I want to split it every 2 seconds. Is there a way to do this with librosa?
So if I had a 60 seconds file, I would split it into 30 two second files.
librosa is first and foremost a library for audio analysis, not audio synthesis or processing. Support for writing simple audio files is there (see here), but the documentation also states:
This function is deprecated in librosa 0.7.0. It will be removed in 0.8. Usage of write_wav should be replaced by soundfile.write.
Given this information, I'd rather use a tool like sox to split audio files.
From "Split mp3 file to TIME sec each using SoX":
You can run SoX like this:
sox file_in.mp3 file_out.mp3 trim 0 2 : newfile : restart
It will create a series of files with a 2-second chunk of the audio each.
If you'd rather stay within Python, you might want to use pysox for the job.
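For example, here is a minimal sketch using pysox's Transformer; the input filename is an assumption, and trim() and build() are used per pysox's documented API, so treat this as a sketch rather than a drop-in solution:
import sox

in_file = 'file_in.mp3'  # assumed input file
length = sox.file_info.duration(in_file)
start = 0
counter = 1
while start < length:
    tfm = sox.Transformer()
    tfm.trim(start, min(start + 2, length))  # keep one 2-second window
    tfm.build(in_file, 'file_out_%03d.mp3' % counter)
    start += 2
    counter += 1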
You can split your file using librosa by running the following code. I have added the necessary comments so that you can understand the steps carried out.
# First load the file
audio, sr = librosa.load(file_name)
# Get number of samples for 2 seconds; replace 2 by any number
buffer = 2 * sr
samples_total = len(audio)
samples_wrote = 0
counter = 1
while samples_wrote < samples_total:
    # check that the buffer does not exceed the total samples
    if buffer > (samples_total - samples_wrote):
        buffer = samples_total - samples_wrote
    block = audio[samples_wrote : (samples_wrote + buffer)]
    out_filename = "split_" + str(counter) + "_" + file_name
    # Write 2 second segment
    librosa.output.write_wav(out_filename, block, sr)
    counter += 1
    samples_wrote += buffer
[Update]
librosa.output.write_wav() has been removed from librosa, so now we have to use soundfile.write()
Import required library
import soundfile as sf
replace
librosa.output.write_wav(out_filename, block, sr)
with
sf.write(out_filename, block, sr)
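For completeness, here is the splitting loop again with the soundfile call swapped in (the logic is unchanged from the answer above):
import librosa
import soundfile as sf

audio, sr = librosa.load(file_name)
buffer = 2 * sr  # number of samples in 2 seconds
samples_total = len(audio)
samples_wrote = 0
counter = 1
while samples_wrote < samples_total:
    # shrink the final buffer so it does not run past the end
    if buffer > (samples_total - samples_wrote):
        buffer = samples_total - samples_wrote
    block = audio[samples_wrote : (samples_wrote + buffer)]
    out_filename = "split_" + str(counter) + "_" + file_name
    sf.write(out_filename, block, sr)  # write the 2 second segment
    counter += 1
    samples_wrote += buffer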
I was just playing around with sound input and output on a Raspberry Pi using Python.
My plan was to read the input of a microphone, manipulate it, and play back the manipulated audio. For the moment I have only tried to read and play back the audio.
The reading seems to work, since I wrote the read data into a wave file in the last step, and the wave file seemed fine.
But the playback is nothing but noise.
Playing the wave file worked as well, so the headset is fine.
I think the problem may be in my settings or the output format.
The code:
import alsaaudio as audio
import time
import audioop
#Input & Output Settings
periodsize = 1024
audioformat = audio.PCM_FORMAT_FLOAT_LE
channels = 16
framerate=8000
#Input Device
inp = audio.PCM(audio.PCM_CAPTURE,audio.PCM_NONBLOCK,device='hw:1,0')
inp.setchannels(channels)
inp.setrate(framerate)
inp.setformat(audioformat)
inp.setperiodsize(periodsize)
#Output Device
out = audio.PCM(audio.PCM_PLAYBACK,device='hw:0,0')
out.setchannels(channels)
out.setrate(framerate)
out.setformat(audioformat)
out.setperiodsize(periodsize)
#Reading the Input
allData = bytearray()
count = 0
while True:
    # reading the input into one long bytearray
    l, data = inp.read()
    for b in data:
        allData.append(b)
    # Just an ending condition
    count += 1
    if count == 4000:
        break
    time.sleep(.001)
# splitting the bytearray into period sized chunks
list1 = [allData[i:i+periodsize] for i in range(0, len(allData), periodsize)]
# Writing the output
for arr in list1:
    # I tested writing the arr's to a wave file at this point
    # and the wave file was fine
    out.write(arr)
Edit: Maybe I should mention that I am using Python 3.
I just found the answer. The format audio.PCM_FORMAT_FLOAT_LE isn't the one used by my headset (I just copied and pasted it without a second thought).
I found out about my microphone's format (and additional information) by running speaker-test in the console.
Since my speaker's format is S16_LE, the code works fine with audioformat = audio.PCM_FORMAT_S16_LE
Consider using plughw (the ALSA subsystem supporting resampling/conversion) for at least the sink part of the chain:
#Output Device
out = audio.PCM(audio.PCM_PLAYBACK,device='plughw:0,0')
This should help to negotiate the sampling rate as well as the data format.
periodsize is better estimated as a fraction (1/times) of the sample rate, like:
periodsize = framerate / 8 (8 = times, for the 8000 Hz sampling rate)
and sleeptime is better estimated as half of the time necessary to play one periodsize:
sleeptime = 1.0 / 16 (1.0 is a second, 16 = 2*times, for the 8000 Hz sampling rate)
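A minimal sketch of those two estimates, using the 8000 Hz figures from the question:
framerate = 8000                   # samples per second
times = 8                          # periods per second
periodsize = framerate // times    # 1000 frames, i.e. 1/8 s of audio
sleeptime = 1.0 / (2 * times)      # 1/16 s, half the playback time of one period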
I am trying to record from multiple microphones simultaneously using Python. I require the recordings to be almost exactly simultaneous, as I am going to cross-correlate the audio signals using scipy in order to get the direction the sound came from. So far, when I plot the results of my recorded sound on a graph, the recordings are out of sync even when the 2 mics are equidistant from the sound source. Here is my code:
import alsaaudio
import numpy
inp1 = alsaaudio.PCM(alsaaudio.PCM_CAPTURE,alsaaudio.PCM_NORMAL,'Set')
inp1.setchannels(1)
inp1.setrate(44100)
inp1.setformat(alsaaudio.PCM_FORMAT_S16_LE)
inp1.setperiodsize(1024)
inp2 = alsaaudio.PCM(alsaaudio.PCM_CAPTURE,alsaaudio.PCM_NORMAL,'Set_1')
inp2.setchannels(1)
inp2.setrate(44100)
inp2.setformat(alsaaudio.PCM_FORMAT_S16_LE)
inp2.setperiodsize(1024)
i = int(raw_input('How many samples of recording?'))
amp1 = []
amp2 = []
while i > 0:
    l, data1 = inp1.read()
    a = numpy.fromstring(data1, dtype='int16')
    amp1.extend(abs(a))
    l, data2 = inp2.read()
    b = numpy.fromstring(data2, dtype='int16')
    amp2.extend(abs(b))
    i -= 1
This gives me my 2 audio signals as amp1 and amp2. I am a beginner when it comes to programming and I think perhaps there is a better way to do this...
To force two devices to start at the same time, the ALSA C API has snd_pcm_link.
This function is not exposed by pyalsaaudio.
How can I produce real-time audio output from music made with Music21? Failing that, how can I produce ANY audio output from music made with Music21 via open-source software? Thanks for the help.
As you've seen, music21 isn't designed to be a music playback system, but it IS designed to be embedded within other playback systems or to call them from within the system. We're not planning on putting too much work into playback systems (because of the hardware support, our being a tiny research lab, the work still needing to be done on musical analysis, etc.), but your solution is so elegant that it is now included in all versions of music21 (post v1.1) as the music21.midi.realtime module. Here's an example that takes music21's ability to dynamically allocate midi channels with different pitch-bend objects in order to simulate microtonal playback (a major problem for most midi playback):
# Set up a detuned piano
# (where each key has a random
# but consistent detuning from 30 cents flat to sharp)
# and play a Bach Chorale on it in real time.
from music21 import *
import random
keyDetune = []
for i in range(128):  # one detune value per MIDI key (0-127)
    keyDetune.append(random.randint(-30, 30))
b = corpus.parse('bach/bwv66.6')
for n in b.flat.notes:
    n.microtone = keyDetune[n.pitch.midi]
sp = midi.realtime.StreamPlayer(b)
sp.play()
The StreamPlayer's .play() function can also take busyFunction and busyArgs and busyWaitMilliseconds arguments which specify a function to call with arguments at most every busyWaitMilliseconds (could be more if your system is slower). There is also an endFunction and endArgs that will be called at the end, in case you want to set up some sort of threaded playback. -- Myke Cuthbert (Music21 creator)
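For instance, here is a minimal sketch of such a progress callback; the callback name and the 500 ms figure are illustrative, and the keyword arguments are assumed to be exactly those described above:
from music21 import corpus, midi

def stillPlaying(args):  # hypothetical callback
    print("playback in progress...")

b = corpus.parse('bach/bwv66.6')
sp = midi.realtime.StreamPlayer(b)
# call stillPlaying at most every 500 milliseconds while the stream plays
sp.play(busyFunction=stillPlaying, busyArgs=None, busyWaitMilliseconds=500)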
So here's what I found out. Here's a Python script that works on Windows XP. It needs pygame in addition to music21.
# genPlayM21Score.py Generates and Plays 2 Music21 Scores "on the fly".
#
# see way below for source notes
from music21 import *
# we create the music21 Bottom Part, and do this explicitly, one object at a time.
n1 = note.Note('e4')
n1.duration.type = 'whole'
n2 = note.Note('d4')
n2.duration.type = 'whole'
m1 = stream.Measure()
m2 = stream.Measure()
m1.append(n1)
m2.append(n2)
partLower = stream.Part()
partLower.append(m1)
partLower.append(m2)
# For the music21 Upper Part, we automate the note creation procedure
data1 = [('g4', 'quarter'), ('a4', 'quarter'), ('b4', 'quarter'), ('c#5', 'quarter')]
data2 = [('d5', 'whole')]
data = [data1, data2]
partUpper = stream.Part()
def makeUpperPart(data):
    for mData in data:
        m = stream.Measure()
        for pitchName, durType in mData:
            n = note.Note(pitchName)
            n.duration.type = durType
            m.append(n)
        partUpper.append(m)
makeUpperPart(data)
# Now, we can add both Part objects into a music21 Score object.
sCadence = stream.Score()
sCadence.insert(0, partUpper)
sCadence.insert(0, partLower)
# Now, let's play the MIDI of the sCadence Score [from memory, ie no file write necessary] using pygame
import cStringIO
# for music21 <= v.1.2:
if hasattr(sCadence, 'midiFile'):
    sCadence_mf = sCadence.midiFile
else:  # for >= v.1.3:
    sCadence_mf = midi.translate.streamToMidiFile(sCadence)
sCadence_mStr = sCadence_mf.writestr()
sCadence_mStrFile = cStringIO.StringIO(sCadence_mStr)
import pygame
freq = 44100 # audio CD quality
bitsize = -16 # signed 16 bit
channels = 2 # 1 is mono, 2 is stereo
buffer = 1024 # number of samples
pygame.mixer.init(freq, bitsize, channels, buffer)
# optional volume 0 to 1.0
pygame.mixer.music.set_volume(0.8)
def play_music(music_file):
    """
    stream music with mixer.music module in blocking manner
    this will stream the sound from disk while playing
    """
    clock = pygame.time.Clock()
    try:
        pygame.mixer.music.load(music_file)
        print "Music file %s loaded!" % music_file
    except pygame.error:
        print "File %s not found! (%s)" % (music_file, pygame.get_error())
        return
    pygame.mixer.music.play()
    while pygame.mixer.music.get_busy():
        # check if playback has finished
        clock.tick(30)
# play the midi stream we just built, straight from memory
play_music(sCadence_mStrFile)
#============================
# now let's make a new music21 Score by reversing the upperPart notes
data1.reverse()
data2 = [('d5', 'whole')]
data = [data1, data2]
partUpper = stream.Part()
makeUpperPart(data)
sCadence2 = stream.Score()
sCadence2.insert(0, partUpper)
sCadence2.insert(0, partLower)
# now let's play the new Score
if hasattr(sCadence2, 'midiFile'):  # music21 <= v.1.2
    sCadence2_mf = sCadence2.midiFile
else:  # music21 >= v.1.3
    sCadence2_mf = midi.translate.streamToMidiFile(sCadence2)
sCadence2_mStr = sCadence2_mf.writestr()
sCadence2_mStrFile = cStringIO.StringIO(sCadence2_mStr)
play_music(sCadence2_mStrFile)
## SOURCE NOTES
## There are 3 sources for this mashup:
# 1. Source for the Music21 Score Creation http://web.mit.edu/music21/doc/html/quickStart.html#creating-notes-measures-parts-and-scores
# 2. Source for the Music21 MidiFile Class Behaviour http://mit.edu/music21/doc/html/moduleMidiBase.html?highlight=midifile#music21.midi.base.MidiFile
# 3. Source for the pygame player: http://www.daniweb.com/software-development/python/code/216979/embed-and-play-midi-music-in-your-code-python