Converting piano roll to MIDI in music21? - python

I am using music21 for handling MIDI and MusicXML files and converting them to a piano roll that I use in my project.
My piano roll is made up of a sequence of 88-dimensional vectors, where each element of a vector represents one pitch. One vector is one time step, which can be a 16th, an 8th, a 4th, and so on. Elements can take three values: {0, 1, 2}. 0 means the note is off. 1 means the note is on. 2 also means the note is on, but it always follows a 1 - that is how I distinguish repeated presses of the same note. E.g., let the time step be an 8th and the two pitches be C and E:
[0 0 0 ... 1 0 0 0 1 ... 0]
[0 0 0 ... 1 0 0 0 1 ... 0]
[0 0 0 ... 2 0 0 0 2 ... 0]
[0 0 0 ... 2 0 0 0 2 ... 0]
[0 0 0 ... 1 0 0 0 0 ... 0]
[0 0 0 ... 1 0 0 0 0 ... 0]
We see that C and E are played simultaneously for a quarter note, then again for a quarter note, and we end with a C that lasts a quarter note.
Right now, I am creating a Stream() for every note and filling it as notes come. That gives me 88 streams, and when I convert that to MIDI and open it with MuseScore, I am left with an unreadable mess.
My question is: is there a nicer way to transform this kind of piano roll to MIDI? Any algorithm or idea I could use would be appreciated.

In my opinion music21 is a very good library, but it is too high-level for this job. There is no such thing as streams, quarter notes, or chords in MIDI -- only messages. Try the Mido library instead. Here is sample code:
from mido import Message, MidiFile, MidiTrack

def stop_note(note, time):
    return Message('note_off', note=note, velocity=0, time=time)

def start_note(note, time):
    return Message('note_on', note=note, velocity=127, time=time)

def roll_to_track(roll):
    delta = 0
    # State of the notes in the roll.
    notes = [False] * len(roll[0])
    # MIDI note for the first column.
    midi_base = 60
    for row in roll:
        for i, col in enumerate(row):
            note = midi_base + i
            if col == 1:
                if notes[i]:
                    # First stop the ringing note.
                    yield stop_note(note, delta)
                    delta = 0
                yield start_note(note, delta)
                delta = 0
                notes[i] = True
            elif col == 0:
                if notes[i]:
                    # Stop the ringing note.
                    yield stop_note(note, delta)
                    delta = 0
                notes[i] = False
            # col == 2 simply lets the note keep ringing.
        # Ticks per row (roughly milliseconds with mido's defaults).
        delta += 500
    # Stop any notes still ringing when the roll ends, so the
    # file does not finish with hanging note_on events.
    for i, ringing in enumerate(notes):
        if ringing:
            yield stop_note(midi_base + i, delta)
            delta = 0
roll = [[0, 0, 0, 1, 0, 0, 0, 1, 0],
        [0, 0, 0, 1, 0, 0, 0, 1, 0],
        [0, 0, 0, 2, 0, 0, 0, 2, 0],
        [0, 1, 0, 2, 0, 0, 0, 2, 0],
        [0, 0, 0, 1, 0, 0, 0, 0, 0],
        [0, 0, 0, 1, 0, 0, 0, 0, 0]]

midi = MidiFile(type=1)
midi.tracks.append(MidiTrack(roll_to_track(roll)))
midi.save('test.mid')
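If you would rather stay inside music21 (for example, to keep the MuseScore rendering readable), an alternative to 88 parallel streams is to insert every note into one Stream at an explicit offset. Below is a minimal sketch, not a tested drop-in: it assumes the {0, 1, 2} encoding described in the question (a note is a maximal run of equal nonzero values, so a switch between 1s and 2s marks a new key press), and that column 0 of an 88-column roll is A0 (MIDI 21).

from music21 import note, stream

def roll_to_stream(roll, midi_base=21, step=0.5):
    # step is the quarterLength of one row (0.5 = an 8th note).
    s = stream.Stream()
    n_rows = len(roll)
    n_cols = len(roll[0])
    for col in range(n_cols):
        row = 0
        while row < n_rows:
            val = roll[row][col]
            if val == 0:
                row += 1
                continue
            # A note is a maximal run of the same nonzero value;
            # a switch between 1s and 2s marks a new key press.
            length = 1
            while row + length < n_rows and roll[row + length][col] == val:
                length += 1
            n = note.Note()
            n.pitch.midi = midi_base + col
            n.quarterLength = length * step
            s.insert(row * step, n)
            row += length
    return s

For the 9-column example roll above, roll_to_stream(roll, midi_base=60).write('midi', fp='test21.mid') would write the MIDI file; running the stream through makeNotation() first may make the MuseScore rendering cleaner still.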

Related

Insert value in numpy array with conditions

I want to insert values into a NumPy array as follows:
If the Nth row is the same as the (N-1)th row, insert 1 for both the Nth and (N-1)th rows, and 0 for the rest.
If the Nth row is different from the (N-1)th row, move to the next column and repeat the condition.
Here is the example:
import numpy as np
import pandas as pd

d = {'col1': [2, 2, 3, 3, 3, 4, 4, 5, 5, 5],
     'col2': [3, 3, 4, 4, 4, 1, 1, 0, 0, 0]}
df = pd.DataFrame(data=d)
np.zeros((10, 4))
###########################################################
OUTPUT MATRIX
1 0 0 0   First two rows are the same, so 1, 1 in the first column
1 0 0 0
0 1 0 0   Three rows are the same: 1, 1, 1
0 1 0 0
0 1 0 0
0 0 1 0   Again, two rows are the same: 1, 1
0 0 1 0
0 0 0 1   Again, three rows are the same: 1, 1, 1
0 0 0 1
0 0 0 1
IIUC, you can achieve this simply with numpy indexing:
# group by successive identical values
group = df.ne(df.shift()).all(1).cumsum().sub(1)
# craft the numpy array
a = np.zeros((len(group), group.max()+1), dtype=int)
a[np.arange(len(df)), group] = 1
print(a)
Alternative with numpy.identity:
# group by successive identical values
group = df.ne(df.shift()).all(1).cumsum().sub(1)
shape = df.groupby(group).size()
# craft the numpy array
a = np.repeat(np.identity(len(shape), dtype=int), shape, axis=0)
print(a)
output:
array([[1, 0, 0, 0],
       [1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 1, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 1],
       [0, 0, 0, 1]])
intermediates:
group
0    0
1    0
2    1
3    1
4    1
5    2
6    2
7    3
8    3
9    3
dtype: int64
shape
0    2
1    3
2    2
3    3
dtype: int64
Other option, just for fun (likely not as efficient on large inputs):
a = pd.get_dummies(df.agg(tuple, axis=1)).to_numpy()
Note that this second option uses groups of identical values, not successive identical values. To group identical values with the first (numpy) approach, you would need to use group = df.groupby(list(df)).ngroup() together with the numpy indexing option (this wouldn't work with repeating the identity).
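A sketch of that variant, reusing the df defined above (for this particular input the result happens to be identical, because equal rows are already adjacent):

# group equal rows wherever they occur, not only consecutive runs
group = df.groupby(list(df)).ngroup()
a = np.zeros((len(group), group.max() + 1), dtype=int)
a[np.arange(len(df)), group] = 1
print(a)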

Is there a way to simplify the creation of all possible (length x height) grids?

Here's my code for a 4x4 grid to better explain my problem:
# The "Duct-Tape" solution
totalGrids = []
for box0 in range(0, 2):
    for box1 in range(0, 2):
        for box2 in range(0, 2):
            for box3 in range(0, 2):
                for box4 in range(0, 2):
                    for box5 in range(0, 2):
                        for box6 in range(0, 2):
                            for box7 in range(0, 2):  # 0 = OutBag, 1 = InBag
                                for box8 in range(0, 2):
                                    for box9 in range(0, 2):
                                        for box10 in range(0, 2):
                                            for box11 in range(0, 2):
                                                for box12 in range(0, 2):
                                                    for box13 in range(0, 2):
                                                        for box14 in range(0, 2):
                                                            for box15 in range(0, 2):
                                                                totalGrids.append(
                                                                    [[box0, box1, box2, box3],
                                                                     [box4, box5, box6, box7],
                                                                     [box8, box9, box10, box11],
                                                                     [box12, box13, box14, box15]])
What's a way to make something like this for a length x height size grid?
This is another way to do it with fewer for loops by using binary arithmetic:
totalGrids = []
for i in range(0, 1 << 16):
    totalGrids.append(
        [
            [(i >> j) & 1 for j in range(0, 4)],
            [(i >> j) & 1 for j in range(4, 8)],
            [(i >> j) & 1 for j in range(8, 12)],
            [(i >> j) & 1 for j in range(12, 16)]
        ])
print(totalGrids[0])
print(totalGrids[1])
print(totalGrids[2])
print()
print(totalGrids[-3])
print(totalGrids[-2])
print(totalGrids[-1])
Output (first 3 and last 3 elements):
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
[[1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
[[0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
[[1, 0, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]
[[0, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]
[[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]
To generalize this from 4 x 4 to height x width, something like this should work:
height = 3
width = 5
totalGrids = []
for i in range(0, 1 << (height * width)):
    totalGrids.append(
        [[(i >> j) & 1 for j in range(k * width, (k + 1) * width)] for k in range(0, height)]
    )
Here is an explanation of the above.
The matrix, which has height x width elements, is to be filled with every possible combination of 0s and 1s across these elements. As an example, if height = 2 and width = 4, then there are 8 elements in total, and one ordering of the required combinations of 0s and 1s is:
0 0 0 0 0 0 0 0 (this is 0 in binary)
0 0 0 0 0 0 0 1 (this is 1 in binary)
0 0 0 0 0 0 1 0 (this is 2 in binary)
0 0 0 0 0 0 1 1 (this is 3 in binary)
...
0 0 0 0 1 1 1 1 (this is 15 in binary)
0 0 0 1 0 0 0 0 (this is 16 in binary)
0 0 0 1 0 0 0 1
0 0 0 1 0 0 1 0
0 0 0 1 0 0 1 1 (EXAMPLE VALUE USED BELOW)
...
0 0 1 0 0 0 0 0 (this is 32 in binary)
...
0 0 1 1 0 0 0 0 (this is 48 in binary)
...
1 1 1 1 1 1 1 1 (this is 255 = 2**8 - 1 in binary)
These are just the binary values from 0 to 2**8 - 1 which can be expressed as Python integers in range(0, 2**8). They are exactly what is needed, and now the only question is how to populate a Python list of lists of size height x width.
The answer is to use binary arithmetic. Let's look at 0 0 0 1 0 0 1 1 as an example. We can specify this in Python as an integer, namely i = 19.
For the 1st slot of 8, we want to use the rightmost binary bit in our example, which is 1. We can extract this using Python's bitwise & operation by taking value = i & 1. Applying & 1 to any integer effectively masks off all but the binary ones-place digit.
For the 2nd slot, we need to add an additional step:
First we slide the bits to the right by 1 position (allowing the rightmost bit to fall off the edge, which is fine since we have already processed it and won't need it again) using Python's right shift operation >> as follows: value = i >> 1. In binary, this yields 0 0 0 0 1 0 0 1, which is the integer 9. The right-shift operator has moved the bit that was in the binary twos-place rightward into the binary ones-place.
Next, we can use the same technique as we did for the 1st slot to mask off all but the ones-place bit of the shifted value: value = value & 1.
Rather than doing the above as two separate statements, we can simply write: value = (i >> 1) & 1.
In general, for the j'th slot, we can extract the j'th bit from our example integer by writing: value = (i >> j) & 1.
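To make this concrete, here is the extraction applied to the example value above:

i = 19  # binary: 0 0 0 1 0 0 1 1
bits = [(i >> j) & 1 for j in range(8)]
print(bits)  # [1, 1, 0, 0, 1, 0, 0, 0] -- ones place first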
Now let's look at the key logic within the loop:
[[(i >> j) & 1 for j in range(k * width, (k + 1) * width)] for k in range(0, height)]
This uses a nested list comprehension to loop first over k in range(0, height) and then over j in range(k * width, (k + 1) * width), and to put the result of the above bitwise expression (i >> j) & 1 into each successive element in our matrix (or list of lists).
Finally, let's look again at the very outer loop in the code:
for i in range(0, 1 << (height * width)):
This uses Python's bitwise left shift operation <<, which does the opposite of what right shift (>>) does, namely to shift the bits of 1 to the left by (height * width) binary positions. Because each shift to the left causes a number to double in value, our left shift expression gives the same result as 2 ** (height * width), which is exactly the number of 0/1 combinations that your question is seeking.
So, by iterating from 0 to 2 ** (height * width), then extracting and collating the bits of each value into the corresponding matrix elements for that iteration's matrix, and appending that matrix to the totalGrids variable, we ultimately construct a list of matrices with the required properties.
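As an aside (not part of the original answer), the same enumeration can be written with itertools.product, which generates the 0/1 combinations directly instead of extracting bits from an integer:

from itertools import product

height = 3
width = 5
totalGrids = [
    [list(bits[k * width:(k + 1) * width]) for k in range(height)]
    for bits in product((0, 1), repeat=height * width)
]

The grids come out in a different order (product varies the last position fastest), but the same 2 ** (height * width) grids are produced.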

Get first number each block of duplicates numbers in a list of 0 and 1

I have a list that looks like this:
a = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, ...]
How do I get the index of the first 1 in each block of ones, so that the resulting indices are:
[8, 23, ...] and so on
I've been using this code:
def find_one(a):
    for i in range(len(a)):
        if a[i] > 0:
            return i

print(find_one(a))
but it gives me only the first occurrence of a 1. How can I implement it to iterate through the entire list?
Thank you!!
You can do it using zip and a list comprehension:
a = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
r = [i for n,(i,v) in zip([1]+a,enumerate(a)) if v > n]
print(r) # [8,23]
Since you tagged pandas, you can use groupby. If s = pd.Series(a), then
>>> x = s.groupby(s.diff().ne(0).cumsum()).head(1).astype(bool)
>>> x[x].index
Int64Index([8, 23], dtype='int64')
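To see why this works: the intermediate s.diff().ne(0).cumsum() labels each run of equal values with its own block number. A small sketch on a shortened input:

import pandas as pd

a = [0, 0, 0, 1, 1, 0, 1]
s = pd.Series(a)
print(s.diff().ne(0).cumsum().tolist())  # [1, 1, 1, 2, 2, 3, 4]

head(1) then keeps one representative per block, and the boolean mask selects the blocks whose representative is 1.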
Without pandas:
b = a[1:]
[(num+1) for num,i in enumerate(zip(a,b)) if i == (0,1)]
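A NumPy variant of the same (previous, current) == (0, 1) transition test is also possible; this sketch is not from the answers above:

import numpy as np

a = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1,
     0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
# a block of ones starts wherever an element exceeds its
# predecessor (with a 0 prepended, so a leading 1 also counts)
starts = np.flatnonzero(np.diff(np.r_[0, a]) == 1)
print(starts)  # [ 8 23]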
# `state` is (prev_char, cur_char)
# where `prev_char` is the previous character seen
# and `cur_char` is the current character
#
# (0, 1) .... previous was "0"
#             current is "1"
#             RECORD THE INDEX.
#             A STRING OF ONES JUST BEGAN.
#
# (0, 0) .... previous was "0"
#             current is "0"
#             do **NOT** record the index
#
# (1, 1) .... previous was "1"
#             current is "1"
#             we are in a string of ones, but
#             not at the beginning of it.
#             do **NOT** record the index.
#
# (1, 0) .... previous was "1"
#             current is "0"
#             a string of ones just ended;
#             not the start of a string of ones.
#             do **NOT** record the index.
state_to_print_decision = dict()
state_to_print_decision[(0, 1)] = True

def find_one(a, state_to_print_decision):
    # Pretend we just saw a bunch of zeros:
    # initialize state to (0, 0).
    state = (0, 0)
    indices = []
    for i in range(len(a)):
        # a[i] is the current character.
        #
        # state[0] is the left element of state,
        # state[1] is the right element.
        #
        # state[1] was the current character;
        # it is now the previous character.
        state = (state[1], a[i])
        it_is_time_to_print = state_to_print_decision.get(state, False)
        if it_is_time_to_print:
            indices.append(i)
    return indices

a = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
print(find_one(a, state_to_print_decision))

How to Generate Adjacent Indices w/ NumPy

So I'm trying to generate a list of possible adjacent movements within a 3d array (preferably n-dimensional).
What I have works as it's supposed to, but I was wondering if there's a more numpythonic way to do so.
import numpy as np

def adjacents(loc, bounds):
    adj = []
    bounds = np.array(bounds) - 1
    if loc[0] > 0:
        adj.append((-1, 0, 0))
    if loc[1] > 0:
        adj.append((0, -1, 0))
    if loc[2] > 0:
        adj.append((0, 0, -1))
    if loc[0] < bounds[0]:
        adj.append((1, 0, 0))
    if loc[1] < bounds[1]:
        adj.append((0, 1, 0))
    if loc[2] < bounds[2]:
        adj.append((0, 0, 1))
    return np.array(adj)
Here are some example outputs:
adjacents((0, 0, 0), (10, 10, 10))
= [[1 0 0]
   [0 1 0]
   [0 0 1]]

adjacents((9, 9, 9), (10, 10, 10))
= [[-1  0  0]
   [ 0 -1  0]
   [ 0  0 -1]]

adjacents((5, 5, 5), (10, 10, 10))
= [[-1  0  0]
   [ 0 -1  0]
   [ 0  0 -1]
   [ 1  0  0]
   [ 0  1  0]
   [ 0  0  1]]
Here's an alternative which is vectorized and uses a constant, prepopulated array:
# all possible moves
_moves = np.array([
    [-1,  0,  0],
    [ 0, -1,  0],
    [ 0,  0, -1],
    [ 1,  0,  0],
    [ 0,  1,  0],
    [ 0,  0,  1]])

def adjacents(loc, bounds):
    loc = np.asarray(loc)
    bounds = np.asarray(bounds)
    mask = np.concatenate((loc > 0, loc < bounds - 1))
    return _moves[mask]
This uses asarray() instead of array() because it avoids copying if the input happens to be an array already. Then mask is constructed as an array of six bools corresponding to the original six if conditions. Finally, the appropriate rows of the constant data _moves are returned.
But what about performance?
The vectorized approach above, while it will appeal to some, actually runs only about half as fast as the original. If it's performance you're after, the best simple change you can make is to remove the line bounds = np.array(bounds) - 1 and subtract 1 inside each of the last three if conditions. That gives you a roughly 2x speedup, because it avoids creating an unnecessary array; see the sketch below.
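For reference, a sketch of that faster variant (the name adjacents_fast is mine; the body is the question's original function with exactly the two changes described above):

def adjacents_fast(loc, bounds):
    adj = []
    # plain tuple comparisons; no temporary array for bounds
    if loc[0] > 0:
        adj.append((-1, 0, 0))
    if loc[1] > 0:
        adj.append((0, -1, 0))
    if loc[2] > 0:
        adj.append((0, 0, -1))
    if loc[0] < bounds[0] - 1:
        adj.append((1, 0, 0))
    if loc[1] < bounds[1] - 1:
        adj.append((0, 1, 0))
    if loc[2] < bounds[2] - 1:
        adj.append((0, 0, 1))
    return np.array(adj)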

Numpy: how to convert observations to probabilities?

I have a feature matrix and corresponding targets, which are ones or zeroes:
import numpy as np

# raw observations
features = np.array([[1, 1, 0],
                     [1, 1, 0],
                     [0, 1, 0],
                     [0, 1, 0],
                     [0, 1, 0],
                     [0, 0, 1]])
targets = np.array([1, 0, 1, 1, 0, 0])
As you can see, the same feature may correspond to both ones and zeros. I need to convert my raw observation matrix to a probability matrix, where each unique feature corresponds to the probability of seeing a one as the target:
[1 1 0] -> 0.5
[0 1 0] -> 0.67
[0 0 1] -> 0
I have constructed a quite straightforward solution:
import numpy as np

# raw observations
features = np.array([[1, 1, 0],
                     [1, 1, 0],
                     [0, 1, 0],
                     [0, 1, 0],
                     [0, 1, 0],
                     [0, 0, 1]])
targets = np.array([1, 0, 1, 1, 0, 0])

from collections import Counter

def convert_obs_to_proba(features, targets):
    features_ = []
    targets_ = []
    # compute unique rows (idx will point to some representative)
    b = np.ascontiguousarray(features).view(np.dtype((np.void, features.dtype.itemsize * features.shape[1])))
    _, idx = np.unique(b, return_index=True)
    idx = idx[::-1]
    zeros = Counter()
    ones = Counter()
    # collect the row-wise number of one and zero targets
    for i, row in enumerate(features):
        if targets[i] == 0:
            zeros[tuple(row)] += 1
        else:
            ones[tuple(row)] += 1
    # iterate over unique features and compute probabilities
    for k in idx:
        unique_row = features[k]
        zero_count = zeros[tuple(unique_row)]
        one_count = ones[tuple(unique_row)]
        proba = float(one_count) / float(zero_count + one_count)
        features_.append(unique_row)
        targets_.append(proba)
    return np.array(features_), np.array(targets_)

features_, targets_ = convert_obs_to_proba(features, targets)
print(features_)
print(targets_)
which:
extracts the unique features;
counts the number of zero and one targets for each unique feature;
computes the probabilities and constructs the result.
Could it be solved in a prettier way using some advanced numpy magic?
Update: the previous code was pretty inefficient (O(n^2)); I converted it to the more performance-friendly version above. The old code:
import numpy as np

# raw observations
features = np.array([[1, 1, 0],
                     [1, 1, 0],
                     [0, 1, 0],
                     [0, 1, 0],
                     [0, 1, 0],
                     [0, 0, 1]])
targets = np.array([1, 0, 1, 1, 0, 0])

def convert_obs_to_proba(features, targets):
    features_ = []
    targets_ = []
    # compute unique rows (idx will point to some representative)
    b = np.ascontiguousarray(features).view(np.dtype((np.void, features.dtype.itemsize * features.shape[1])))
    _, idx = np.unique(b, return_index=True)
    idx = idx[::-1]
    # calculate ZERO class occurrences and ONE class occurrences
    for k in idx:
        unique_row = features[k]
        zeros = 0
        ones = 0
        for i, row in enumerate(features):
            if np.array_equal(row, unique_row):
                if targets[i] == 0:
                    zeros += 1
                else:
                    ones += 1
        proba = float(ones) / float(zeros + ones)
        features_.append(unique_row)
        targets_.append(proba)
    return np.array(features_), np.array(targets_)

features_, targets_ = convert_obs_to_proba(features, targets)
print(features_)
print(targets_)
It's easy using Pandas:
import pandas as pd

df = pd.DataFrame(features)
df['targets'] = targets
Now you have:
   0  1  2  targets
0  1  1  0        1
1  1  1  0        0
2  0  1  0        1
3  0  1  0        1
4  0  1  0        0
5  0  0  1        0
Now, the fancy part:
df.groupby([0,1,2]).targets.mean()
Gives you:
0  1  2
0  0  1    0.000000
   1  0    0.666667
1  1  0    0.500000
Name: targets, dtype: float64
Pandas doesn't print the 0 at the leftmost part of the 0.666667 row (repeated values in a MultiIndex are blanked when displayed), but if you inspect the value there, it is indeed 0.
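A pure-NumPy equivalent of the groupby-mean is also possible. A sketch, assuming NumPy >= 1.13 for the axis=0 keyword of np.unique:

uniq, inverse = np.unique(features, axis=0, return_inverse=True)
# per-group sum of targets divided by per-group count
proba = np.bincount(inverse, weights=targets) / np.bincount(inverse)
print(uniq)   # the unique feature rows, lexicographically sorted
print(proba)  # [0.         0.66666667 0.5       ]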
np.sum(np.reshape([targets[f] if tuple(features[f])==tuple(i) else 0 for i in np.vstack(set(map(tuple,features))) for f in range(features.shape[0])],features.shape[::-1]),axis=1)/np.sum(np.reshape([1 if tuple(features[f])==tuple(i) else 0 for i in np.vstack(set(map(tuple,features))) for f in range(features.shape[0])],features.shape[::-1]),axis=1)
Here you go, numpy magic! Although unnecessarily so; this could probably be cleaned up using some boring variables ;)
(And it is probably far from optimal.)
